Sample records for point-spread function (PSF)

LiDAR is an efficient optical remote sensing technology with applications in geography, forestry, and defense. Its effectiveness is often limited by the signal-to-noise ratio (SNR). Geiger-mode avalanche photodiode (APD) detectors operate above the breakdown voltage, where a single photoelectron can trigger a current surge, making the device extremely sensitive. This sensitivity comes at the expense of computationally intensive noise filtering: background photons and dark counts degrade the imagery and limit the system's capability. Common noise-reduction algorithms suffer from drawbacks such as overly aggressive filtering or decimation of the data to improve throughput. In recent years there has been growing interest in Graphics Processing Units (GPUs) for their massively parallel processing power, and in this paper we leverage that capability to reduce processing latency. We accelerate a point-spread function (PSF) filter algorithm, a local spatial measure, on the GPU (GPGPU). The idea is to use a kernel density estimation technique for point clustering: we associate with every point of the input data a local likelihood measure capturing the probability that a 3D point is a true target-return photon rather than noise (background photons, dark current). This process suppresses noise and allows for the detection of outliers. Applying this approach to the LiDAR noise filtering problem, we observe a speed-up factor of 30-50 times over a traditional sequential CPU implementation.
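
The clustering idea above can be sketched with a Gaussian kernel density estimate over each point's nearest neighbours. The following is a minimal CPU sketch; the `k`, `bandwidth`, and `quantile` parameters are illustrative assumptions rather than values from the paper, and the GPU parallelization is omitted:

```python
import numpy as np
from scipy.spatial import cKDTree

def local_density(points, k=8, bandwidth=0.5):
    """Gaussian-kernel density estimate at each point from its k nearest neighbours."""
    tree = cKDTree(points)
    # query k+1 neighbours because each point's nearest neighbour is itself (distance 0)
    dists, _ = tree.query(points, k=k + 1)
    return np.exp(-(dists[:, 1:] ** 2) / (2 * bandwidth ** 2)).sum(axis=1)

def filter_noise(points, k=8, bandwidth=0.5, quantile=0.2):
    """Keep points whose local likelihood exceeds a data-driven threshold."""
    density = local_density(points, k, bandwidth)
    return points[density > np.quantile(density, quantile)]
```

Isolated noise points have near-zero kernel density and fall below the threshold, while points inside dense target-return clusters are retained.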

In recent years, with the development of the Flat-Field Holographic Concave Grating, it has been adopted in all kinds of UV spectrometers. With a single optical surface, the Flat-Field Holographic Concave Grating performs both dispersion and imaging, which makes the UV spectrometer system design quite compact. However, calibrating the Flat-Field Holographic Concave Grating is very difficult, and various factors make its imaging quality hard to guarantee, so the spectral signal must be restored before use. Guided by the theory of signals and systems, and after a series of experiments, we found that our UV spectrometer system is a linear space-variant system. This means that the PSF must be measured for every pixel of the system, which contains thousands of pixels; obviously, that is a large amount of computation. To deal with this problem, we propose a novel signal restoration method that divides the system into several linear space-invariant subsystems and then performs signal restoration with their PSFs. Our experiments show that this method is effective and inexpensive.

This article investigates the restoration of ultrasonic pulse-echo C-scan images by means of deconvolution with a point-spread function (PSF). The deconvolution concept from linear system theory (LST) is linked to the wave-equation formulation of the imaging process, and an analytic formula for the PSF of planar transducers is derived. For this analytic expression, different numerical and analytic approximation schemes for evaluating the PSF are presented. By comparing simulated images with measured C-scan images, we demonstrate that the assumptions of LST in combination with our formula for the PSF are a good model for the pulse-echo imaging process. To reconstruct the object from a C-scan image, we compare different deconvolution schemes: the Wiener filter, the ForWaRD algorithm, and the Richardson-Lucy algorithm. The best results are obtained with the Richardson-Lucy algorithm with total variation regularization. For distances greater than or equal to twice the near-field distance, our experiments show that the numerically computed PSF can be replaced with a simple closed analytic term based on a far-field approximation.

The maximum-entropy method (MEM) has been applied for the deconvolution of the point-spread function (PSF) of two-dimensional X-ray detectors. The method is robust, model- and image-independent, and depends only on a correct description of the two-dimensional point-spread function and the gain factor.

The intensity levels in a three-dimensional (3D) reconstruction obtained by electron tomography can be influenced by several experimental imperfections, and such artifacts hamper a quantitative interpretation of the results. In this paper, we correct for artificial intensity variations by determining the 3D point-spread function (PSF) of a tomographic reconstruction based on high-angle annular dark-field scanning transmission electron microscopy. The large tails of the PSF cause an underestimation of the intensity of smaller particles, which in turn hampers an accurate radius estimate. Here, the error introduced by the PSF is quantified and corrected a posteriori. Highlights: • Intensity variations in 3D reconstructions hamper quantification of tomography data. • These variations are corrected based on the point-spread function. • The approach can be considered an optimized route to 3D quantification.

The point-spread function (PSF) plays a fundamental role in fluorescence microscopy. A realistic and accurately calculated PSF model can significantly improve the performance of 3D deconvolution microscopy and also the localization accuracy in single-molecule microscopy. In this work, we propose a fast and accurate approximation of the Gibson-Lanni model, which has been shown to represent the PSF suitably under a variety of imaging conditions. We express the Kirchhoff integral in this model as a linear combination of rescaled Bessel functions, thus providing an integral-free way to perform the calculation. The explicit approximation error in terms of the parameters is given numerically. Experiments demonstrate that the proposed approach achieves the same accuracy in significantly less computational time than current state-of-the-art techniques. This approach can also be extended to other microscopy PSF models.
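
The key step, writing a radial quantity as a linear combination of rescaled Bessel functions, can be illustrated with an ordinary least-squares fit to a J0 basis. The basis scaling below is an arbitrary illustrative choice, not the scaling derived in the paper:

```python
import numpy as np
from scipy.special import j0

def fit_bessel_series(r, f_samples, n_terms=6, scale=3.0):
    """Approximate f(r) by a linear combination of rescaled J0 Bessel functions."""
    # basis[:, m] = J0(scale * (m + 1) * r); the scaling factors are illustrative
    basis = np.column_stack([j0(scale * (m + 1) * r) for m in range(n_terms)])
    coeffs, *_ = np.linalg.lstsq(basis, f_samples, rcond=None)
    return coeffs, basis @ coeffs
```

Once the coefficients are known, evaluating the approximation is a matrix-vector product, which is what makes the integral-free evaluation fast.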

The point-spread function (PSF) is of fundamental importance in estimating the imaging resolution of optical imaging systems. Using the Collins formula, an analytical imaging formula for a ghost imaging system is obtained. The intensity-fluctuation correlation function can be viewed as the convolution of the original object with a PSF, and the imaging resolution is determined by the width of the PSF. Based on optical transfer matrix theory, we present an analytical formula for the width of the PSF, by which one can estimate the imaging resolution of a newly designed imaging scheme and compare it with that of an existing imaging system. Several typical ghost imaging systems are chosen to verify our theoretical results experimentally.

and ρ is the radial coordinate in the aperture plane (u, v). 2.2.1 Computation of the transverse APSF and its irradiance. The amplitude impulse response of the considered aperture, or the amplitude point-spread function (APSF), is computed by applying the Fourier transform to the aperture represented by eq. (8) to obtain ...

The point-spread function obtainable in an astronomical instrument using CCD readout is limited by a number of factors, among them the lateral diffusion of charge before it is collected in the potential wells. The authors study this problem both theoretically and experimentally, with emphasis on the thick CCDs on high-resistivity n-type substrates being developed at Lawrence Berkeley National Laboratory.

A Kelvin probe force microscopy (KPFM) image is sometimes difficult to interpret because it is a blurred representation of the true surface potential (SP) distribution of the materials under test. The blurring arises because KPFM relies on the detection of the electrostatic force, which is long-range compared to other surface forces. Usually, the KPFM imaging model is described as the convolution of the true SP distribution of the sample with an intrinsic point-spread function (PSF) of the measurement system, so restoring the true SP signals from the blurred ones requires the intrinsic PSF of the system. In this work, we present a way to calibrate the PSF of the KPFM system experimentally. Taking the actual probe shape and experimental parameters into consideration, this calibration method leads to a more accurate PSF than those obtained from simulations. Moreover, a nonlinear reconstruction algorithm based on total variation (TV) regularization is applied to the KPFM measurement to reverse the blurring introduced by the PSF during imaging; as a result, noise is reduced and the fidelity of the SP signals is improved.

This paper presents a method to predict the limit of possible resolution enhancement given a sequence of low-resolution images. Three important parameters influence this limit: the total point-spread function (PSF), the Signal-to-Noise Ratio (SNR), and the number of input images.

Confocal Raman Microscopy (CRM) has matured into one of the most powerful instruments in analytical science because of its molecular sensitivity and high spatial resolution. Compared with conventional Raman microscopy, CRM can perform three-dimensional mapping of tiny samples and achieves high spatial resolution thanks to its unique pinhole. With the wide application of the instrument, there is a growing need to evaluate the imaging performance of the system. The point-spread function (PSF) is an important means of evaluating the imaging capability of an optical instrument. Among the various methods for measuring the PSF, the point-source method is widely used because it is easy to operate and its results approximate the true PSF. In the point-source method, the point-source size has a significant impact on the final measurement accuracy. In this paper, the influence of point-source size on the measurement accuracy of the PSF is analyzed and verified experimentally. A theoretical model of the lateral PSF for CRM is established, and the effect of point-source size on the full width at half maximum of the lateral PSF is simulated. For long-term preservation and measurement convenience, a PSF measurement phantom is designed using polydimethylsiloxane resin doped with polystyrene microspheres of different sizes. The PSFs of the CRM are measured with the different sizes of microspheres, and the results are compared with the simulations. The results provide a guide for measuring the PSF of a CRM.
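
The effect of a finite point-source size on the measured lateral PSF can be mimicked numerically by blurring an ideal Gaussian profile with a top-hat source and measuring the full width at half maximum. All widths below are illustrative and in arbitrary units, not the paper's experimental values:

```python
import numpy as np

def fwhm(x, y):
    """FWHM of a sampled profile (half-maximum sample count; assumes a fine grid)."""
    dx = x[1] - x[0]
    return dx * np.count_nonzero(y >= y.max() / 2)

def measured_psf(x, sigma, source_width):
    """Ideal Gaussian lateral PSF blurred by a finite (top-hat) point source."""
    psf = np.exp(-x**2 / (2 * sigma**2))
    if source_width == 0:
        return psf
    source = (np.abs(x) <= source_width / 2).astype(float)
    blurred = np.convolve(psf, source, mode="same")
    return blurred / blurred.max()
```

A larger source broadens the measured profile, which is exactly why the point-source size limits how well the measurement approximates the true PSF.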

Extreme ultraviolet (EUV) lithography is under development for possible deployment at the 32-nm technology node. One active area of research in this field is the development of photoresists that can meet the stringent requirements (high resolution, high sensitivity, low LER, etc.) of lithography in this regime. To facilitate research in this and other areas related to EUV lithography, a printing station based upon the 0.3-NA Micro Exposure Tool (MET) optic was established at the Advanced Light Source, a synchrotron facility at Lawrence Berkeley National Laboratory. A resist modeling technique using a resist point-spread function has been shown to agree well with experiments for certain EUV resists such as Shipley EUV-2D [2]. The resist point-spread function is a two-dimensional function that, when convolved with the simulated aerial image for a given mask pattern and applied to a threshold function, gives a representation of the photoresist pattern remaining after development. The simplicity of this modeling approach makes it attractive for rapid modeling of photoresists in process-development applications. In this work, the resist point-spread functions for three current high-resolution EUV photoresists [Rohm and Haas EUV-2D, Rohm and Haas MET-1K (XP 3454C), and KRS] are extracted experimentally. This model is then used in combination with aerial-image simulations (including the effects of projection-optic aberrations) to predict the resist pattern for a variety of test patterns. These predictions are compared with experimental results to evaluate the effectiveness of this modeling technique for newer high-resolution EUV resists.

The precise knowledge of the point-spread function is central to the characterization of any imaging system. In fluorescence microscopy, point-spread function (PSF) determination has become a common and obligatory task for each new experimental device, mainly due to its strong dependence on acquisition conditions. During the last decade, algorithms have been developed for the precise calculation of the PSF, which fit model parameters that describe image formation in the microscope to experimental data. As a contribution to this subject, a comparative study of three parameter-estimation methods is reported, namely: I-divergence minimization (MIDIV), maximum likelihood (ML), and non-linear least squares (LSQR). They were applied to the estimation of the point-source position on the optical axis, using a physical model. The methods' performance was evaluated under different conditions and noise levels using synthetic images, considering success percentage, iteration number, computation time, accuracy, and precision. The main results showed that axial position estimation requires a high SNR to achieve an acceptable success level, and a higher one still to approach the lower bound on the estimation error. ML achieved a higher success percentage at lower SNR than MIDIV and LSQR with an intrinsic noise source. Only the ML and MIDIV methods reached the error lower bound, and only with data belonging to the optical axis and high SNR. Extrinsic noise sources worsened the success percentage, but for all methods studied no difference was found between the noise sources for a given method.

X-ray cone-beam computed tomography (CT) has notable features such as high efficiency and precision and is widely used in medical imaging and industrial non-destructive testing, but inherent imaging degradation reduces the quality of CT images. Aimed at the problems of projection-image degradation and restoration in cone-beam CT, a point-spread function (PSF) modeling method is first proposed. A general PSF model of cone-beam CT is established, based on which the PSF under arbitrary scanning conditions can be calculated directly for projection-image restoration without additional measurement, greatly improving the convenience of applying cone-beam CT. Secondly, a projection-image restoration algorithm based on pre-filtering and pre-segmentation is proposed, which makes the edge contours in projection images and slice images clearer after restoration while keeping the noise at a level equivalent to that of the original images. Finally, experiments verified the feasibility and effectiveness of the proposed methods.

Point-spread function (PSF) mapping enables estimation of the displacement fields required for distortion correction of echo-planar images. Recently, a highly accelerated approach was introduced for estimating displacements from the phase slope of under-sampled PSF mapping data. Sampling schemes with varying spacing were proposed, requiring stepwise phase unwrapping. To avoid unwrapping errors, an alternative approach applying the concept of finite rate of innovation to PSF mapping (FRIP) is introduced, which uses a pattern-search strategy to locate the PSF peak, and the two methods are compared. Fully sampled PSF data were acquired in six subjects at 3.0 T, and distortion maps were estimated after retrospective under-sampling. The two methods were compared for both previously published and newly optimized sampling patterns. Prospectively under-sampled data were also acquired. Shift maps were estimated, and deviations relative to the fully sampled reference map were calculated. The best performance was achieved when using FRIP with a previously proposed sampling scheme; the two methods were comparable for the remaining schemes. The displacement-field errors tended to decrease as the number of samples or their spacing increased. A robust method for estimating the position of the PSF peak has been introduced.

Atmospheric turbulence is a fundamental problem in imaging through long slant ranges, horizontal paths, or up-looking astronomical cases through the atmosphere. An essential characterization of atmospheric turbulence is the point-spread function (PSF). Turbulence images can be simulated to study basic questions, such as image quality and image restoration, by synthesizing PSFs with the desired properties. In this paper, we report a method to synthesize PSFs of atmospheric turbulence. The method uses recent developments in sparse and redundant representations. From a training set of measured atmospheric PSFs, we construct a dictionary of "basis functions" that characterize the atmospheric turbulence PSFs. A PSF can be synthesized from this dictionary by a properly weighted combination of dictionary elements. We describe an algorithm to synthesize PSFs from the dictionary; it can synthesize PSFs in three orders of magnitude less computing time than conventional wave-optics propagation methods. The resulting PSFs are also shown to be statistically representative of the turbulence conditions used to construct the dictionary.
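
The synthesis step, forming a PSF as a properly weighted combination of dictionary elements, can be sketched with a nonnegative least-squares fit. This is a stand-in for whatever sparse-coding solver the paper actually uses, and the Gaussian "atoms" below are purely illustrative:

```python
import numpy as np
from scipy.optimize import nnls

def synthesize_psf(dictionary, target):
    """Fit nonnegative weights so that dictionary @ weights approximates `target`.

    `dictionary` holds one (flattened) PSF atom per column."""
    weights, _ = nnls(dictionary, target)
    return weights, dictionary @ weights
```

Because evaluating `dictionary @ weights` is a single matrix-vector product, synthesis is far cheaper than propagating a wavefront through a simulated turbulent atmosphere, which is the source of the speed-up the abstract claims.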

In mobile phone lenses, producing clear images over object distances from infinity to close-up poses a new trade-off. We found that wave-front coding, applied to extend the depth of field, may solve this problem. By using a cubic phase mask (CPM), the blurred point-spread function (PSF) becomes substantially invariant to defocus. Thus, the ideal hyperfocal-distance condition can be satisfied as long as the constant blurred image can be recovered by simple digital signal processing. In this paper, we propose a different design method for a computational imaging lens for mobile phones with an ideal depth of field, based on PSF focus invariance. Because it is difficult to compare the similarity of different PSFs, we define a new correlation metric to evaluate and optimize PSF similarity. Moreover, by adding an anti-symmetric free-form phase plate at the aperture stop and using the correlation and the Strehl ratio as the two major optimization operands, we obtain the optimum phase-plate surface achieving the required extended depth of field (EDoF). The resulting PSF on the focal plane is significantly invariant to object distance varying from infinity to 10 cm.

To solve the problem of inaccuracy when estimating the point-spread function (PSF) of the ideal original image in traditional projection-onto-convex-sets (POCS) super-resolution (SR) reconstruction, this paper presents an improved POCS SR algorithm based on PSF estimation from low-resolution (LR) remote sensing images. The proposed algorithm can improve the spatial resolution of the image and benefit visual interpretation of agricultural crops. The PSF of the high-resolution (HR) image is unknown in practice, so analyzing the relationship between the PSFs of the HR and LR images is important for estimating the PSF of the HR image from multiple LR images. In this study, a linear relationship between the PSFs of the HR and LR images is proven. In addition, a novel slant knife-edge method is employed, which improves the accuracy of the PSF estimation from LR images. Finally, the proposed method is applied to reconstruct airborne digital sensor 40 (ADS40) three-line array images and the overlapped areas of two adjacent GF-2 images by embedding the estimated PSF of the HR image in the original POCS SR algorithm. Experimental results show that the proposed method yields higher-quality reconstructed images than the blind SR method and bicubic interpolation.

Purpose: Single photon emission computed tomography (SPECT) has become an important noninvasive imaging technique in small-animal research. Due to the high resolution required in small-animal SPECT systems, the spatially variant system response needs to be included in the reconstruction algorithm, and accurate modeling of the system response should result in a major improvement in the quality of reconstructed images. The aim of this study was to quantitatively assess the impact that accurate modeling of the spatially variant collimator/detector response has on image-quality parameters, using a low-magnification SPECT system equipped with a pinhole collimator and a small gamma camera. Methods: Three methods were used to model the point-spread function (PSF). In the first, only the geometrical pinhole aperture was included in the PSF. In the second, septal penetration through the pinhole collimator was added. In the third, the measured intrinsic detector response was incorporated. Tomographic spatial resolution was evaluated, and contrast, recovery coefficients, contrast-to-noise ratio, and noise were quantified using a custom-built NEMA NU 4–2008 image-quality phantom. Results: A high correlation was found between the experimental data for the intrinsic detector response and values fitted with an asymmetric Gaussian distribution. For all PSF models, resolution improved as the distance from the point source to the center of the field of view increased and as the acquisition radius diminished. An improvement in resolution was observed after a minimum of five iterations when the PSF modeling included more corrections. Contrast, recovery coefficients, and contrast-to-noise ratio were better for the same level of image noise when more accurate models were included. Ring-type artifacts were observed when the number of iterations exceeded 12. Conclusions: Accurate modeling of the PSF improves resolution, contrast, and recovery coefficients.

The Large Area Telescope (LAT) on the Fermi Gamma-ray Space Telescope is a pair-conversion telescope designed to detect photons with energies from approximately 20 MeV to >300 GeV. The pre-launch response functions of the LAT were determined through extensive Monte Carlo simulations and beam tests. The point-spread function (PSF) characterizing the angular distribution of reconstructed photons as a function of energy and geometry in the detector is determined here from two years of on-orbit data by examining the distributions of γ rays from pulsars and active galactic nuclei (AGNs). Above 3 GeV, the PSF is found to be broader than the pre-launch PSF. We checked for dependence of the PSF on the class of γ-ray source and on the observation epoch and found none. We also investigated several possible spatial models for pair-halo emission around BL Lac AGNs. We found no evidence for a component with spatial extension larger than the PSF and set upper limits on the amplitude of halo emission in stacked images of low- and high-redshift BL Lac AGNs and the TeV blazars 1ES0229+200 and 1ES0347-121.

Purpose: Deconvolution is a widely used tool in image reconstruction when a linear imaging system has been blurred by an imperfect system transfer function. However, because the point-spread function (PSF) typically has a Gaussian-like distribution, coherent high-frequency components of the image are hard to restore in most previous scanning imaging systems, even when a relatively accurate PSF is available. We propose a novel method for the deconvolution of images obtained with a shape-modulated PSF. Methods: We use two different types of PSF, a Gaussian shape and a donut shape, to convolve the original image and thereby simulate the scanning imaging process. By deconvolving the two images with the corresponding given priors, the quality of the deblurred images is compared. We then find the critical size of the donut shape, relative to the Gaussian shape, that yields similar deconvolution results. Calculations of the tight-focusing process using a radially polarized beam show that such a donut size is achievable under the same conditions. Results: The effects of different relative sizes of the donut and Gaussian shapes are investigated. When the full-width-at-half-maximum (FWHM) ratio of the donut to the Gaussian shape is set to about 1.83, similar resolution results are obtained with our deconvolution method; decreasing the size of the donut further favors the method. A mask with both amplitude and phase modulation is used to create a donut-shaped PSF for comparison with the non-modulated Gaussian PSF, and a donut smaller than our critical value is obtained. Conclusion: The donut-shaped PSF is shown to be useful and achievable in imaging and deconvolution processing, with potential practical applications in high-resolution imaging of biological samples.

The imaging system's properties are central to the correct interpretation of any image. In a scanning electron microscope, regions of different composition generally interact in a highly nonlinear way during signal generation. Using Monte Carlo simulations, we found that in resin-embedded, heavy-metal-stained biological specimens the staining is sufficiently dilute to allow an approximately linear treatment. We then mapped point-spread functions for backscattered-electron contrast, for primary energies of 3 and 7 keV and for different detector specifications. The point-spread functions are surprisingly well confined (both laterally and in depth), even compared to the distribution of only those scattered electrons that leave the sample again.

We investigate the point-spread function of a multimode fiber. The distortion of the focal spot created on the fiber output facet is studied for a variety of parameters. We develop a theoretical model of wavefront shaping through a multimode fiber and use it to confirm our experimental results and analyze the nature of the focal distortions. We show that aberration-free imaging with a large field of view can be achieved by using an appropriate number of segments on the spatial light modulator during the wavefront-shaping procedure. The results describe aberration limits for imaging with multimode fibers as in, e.g., microendoscopy.

Background: The point-spread function (PSF) of positron emission tomography (PET) depends on the position across the field of view (FOV), and reconstruction based on the PSF improves spatial resolution and quantitative accuracy. The present study aimed to quantify the effects of PSF correction as a function of the position of a traceable point-like 22Na source over the FOV on two PET scanners with different detector designs. Methods: We used Discovery 600 and Discovery 710 (GE Healthcare) PET scanners and traceable point-like 22Na sources (<1 MBq) with a spherical absorber design that assures a uniform angular distribution of the emitted annihilation photons. The source was moved in three directions at intervals of 1 cm from the center towards the peripheral FOV using a three-dimensional (3D) positioning robot, and data were acquired over a period of 2 min per point. The PET data were reconstructed by filtered back projection (FBP), ordered-subset expectation maximization (OSEM), OSEM + PSF, and OSEM + PSF + time-of-flight (TOF). The full width at half maximum (FWHM) was determined according to the NEMA method, and total counts in regions of interest (ROIs) were quantified for each reconstruction. Results: The radial FWHM of FBP and OSEM increased towards the peripheral FOV, whereas PSF-based reconstruction recovered the FWHM at all points in the FOV of both scanners. The radial FWHM for PSF was 30–50 % lower than that of OSEM at the center of the FOV. The accuracy of PSF correction was independent of detector design. Quantitative values were stable across the FOV for all reconstruction methods, and the effect of TOF on spatial resolution and quantitation accuracy was less noticeable. Conclusions: The traceable 22Na point-like source allowed the evaluation of spatial resolution and quantitative accuracy across the FOV using different reconstruction methods and scanners. PSF-based reconstruction reduces the dependence of the spatial resolution on the position in the FOV.

The aim of this study was to quantitatively evaluate the edge artifacts in PET images reconstructed using the point-spread function (PSF) algorithm at different sphere-to-background ratios of radioactivity (SBRs). We used a NEMA IEC body phantom consisting of six spheres with inner diameters of 37, 28, 22, 17, 13 and 10 mm. The background was filled with (18)F solution with a radioactivity concentration of 2.65 kBq/mL. We prepared phantom sets with SBRs of 16, 8, 4 and 2. The PET data were acquired for 20 min using a Biograph mCT scanner. The images were reconstructed with the baseline ordered subsets expectation maximization (OSEM) algorithm, and with the OSEM + PSF correction model (PSF). For the image reconstruction, the number of iterations ranged from one to 10. The phantom PET image analyses comprised a visual assessment of the PET images and profiles, a contrast recovery coefficient (CRC), defined as the ratio of the SBR in the images to the true SBR, and the percent change in the maximum count between the OSEM and PSF images (Δ% counts). In the PSF images, the spheres with a diameter of 17 mm or larger were surrounded by a dense edge in comparison with the OSEM images. In the spheres with a diameter of 22 mm or smaller, an overshoot appeared at the center of the spheres as a sharp peak in the PSF images at low SBRs. These edge artifacts became more pronounced as the SBR increased. Overestimation of the CRC was observed in the 13 mm spheres in the PSF images. In the spheres with a diameter of 17 mm or smaller, the Δ% counts increased with increasing SBR, reaching 91 % in the 10-mm sphere at an SBR of 16. The edge artifacts in PET images reconstructed using the PSF algorithm increased with increasing SBR. In the small spheres, the edge artifact was observed as a sharp peak at the center of the spheres and could result in overestimation.
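The two metrics used in this analysis reduce to simple ratios; a hedged sketch, with helper names of our own choosing rather than from the original study:

```python
# Illustrative helpers for the two phantom metrics: the contrast
# recovery coefficient (CRC), defined here as the measured
# sphere-to-background ratio divided by the true ratio, and the percent
# change in maximum counts between the OSEM and PSF reconstructions.

def contrast_recovery_coefficient(measured_sbr, true_sbr):
    """CRC = measured SBR / true SBR; 1.0 means perfect recovery."""
    return measured_sbr / true_sbr

def delta_percent_counts(max_osem, max_psf):
    """Percent change of the maximum count from the OSEM to PSF image."""
    return 100.0 * (max_psf - max_osem) / max_osem
```

A CRC above 1.0 (as reported here for the 13 mm spheres) indicates overestimation; a Δ% counts of 91 corresponds to a PSF maximum nearly double the OSEM maximum.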

In an effort to establish the imaging properties of a new type of polarized-light microscope, we recorded images of small, uniaxial, birefringent crystals. We show that the sequence of in-focus and out-of-focus images, the so-called point-spread function, of a submicroscopic crystal can be used to measure the orientation of its optic axis in three-dimensional space. By analogy to conoscopic images, out-of-focus images reveal the changes in relative phase shift between the extraordinary and the ordinary rays that propagate in different directions through the crystal. We also present simulated images of a pointlike anisotropic scattering particle and compare these with our experimental findings. The theoretical model is based on a complete vectorial theory for partially coherent imaging by use of polarized light and high-numerical-aperture lenses.

Objective(s): The present study was conducted to examine whether the standardized uptake value (SUV) may be affected by the spatial position of a lesion in the radial direction on positron emission tomography (PET) images obtained via two reconstruction methods, time-of-flight (TOF) and point-spread function (PSF). Methods: A cylinder phantom with a sphere (30 mm diameter) located in the center was used in this study. Fluorine-18 fluorodeoxyglucose (18F-FDG) concentrations of 5.3 kBq/ml and 21.2 kBq/ml were used for the background in the cylinder phantom and the central sphere, respectively. Using TOF and PSF, SUVmax and SUVmean were determined while moving the phantom in the horizontal (X) direction from the center of the field of view (FOV; 0 mm) to positions at 50, 100, 150 and 200 mm, respectively. Furthermore, we examined 41 patients (23 male, 18 female; mean age: 68±11.2 years) with lymph node tumors who had undergone 18F-FDG PET examinations. The distance of each lymph node from the FOV center was measured on the clinical images. Results: When the distance of a lesion from the FOV center exceeded 100 mm, the SUVmax obtained with the cylinder phantom was overestimated, while the SUVmean by TOF and/or PSF was underestimated. In the clinical examinations, the average volume of interest (VOI) was 8.5 cm3. Concomitant use of PSF increased SUVmax and SUVmean by 27.9% and 2.8%, respectively. However, the size of the VOI and the distance from the FOV center did not affect SUVmax or SUVmean in the clinical examinations. Conclusion: The reliability of SUV quantification by TOF and/or PSF decreased when the tumor was located at a distance of 100 mm or farther from the center of the FOV. In clinical examinations, if the lymph node was located within 100 mm of the FOV center, the SUV remained stable within a constantly increasing range with the use of both TOF and PSF. We conclude that use of both TOF and PSF may be helpful.

A major challenge with studying plasmon-mediated emission events is the small size of plasmonic nanoparticles relative to the wavelength of light. Objects smaller than roughly half the wavelength of light will appear as diffraction-limited spots in far-field optical images, presenting a significant experimental challenge for studying plasmonic processes on the nanoscale. Super-resolution imaging has recently been applied to plasmonic nanosystems and allows plasmon-mediated emission to be resolved on the order of ˜5 nm. In super-resolution imaging, a diffraction-limited spot is fit to some model function in order to calculate the position of the emission centroid, which represents the location of the emitter. However, the accuracy of the centroid position strongly depends on how well the fitting function describes the data. This Perspective discusses the commonly used two-dimensional Gaussian fitting function applied to super-resolution imaging of plasmon-mediated emission, then introduces an alternative model based on dipole point-spread functions. The two fitting models are compared and contrasted for super-resolution imaging of nanoparticle scattering/luminescence, surface-enhanced Raman scattering, and surface-enhanced fluorescence.
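The centroid step mentioned above can be illustrated with the simplest possible estimator, an intensity-weighted centroid of the background-subtracted spot. This is only a stand-in for the full 2D Gaussian or dipole-model fits discussed in the Perspective, included here to make the notion of "emission centroid" concrete:

```python
# Hypothetical sketch: intensity-weighted centroid of a 2D spot image.
# Assumes background has already been subtracted and all values are
# non-negative. Real super-resolution pipelines fit a model PSF instead.

def centroid(image):
    """Return the (x, y) intensity-weighted centroid of a 2D image
    given as a list of rows."""
    total = sx = sy = 0.0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            total += v
            sx += v * x
            sy += v * y
    return sx / total, sy / total
```

For a symmetric spot the centroid lands on the central pixel; model-based fitting improves on this when the spot shape deviates from the assumed symmetry, which is precisely the dipole-versus-Gaussian issue raised above.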

New Bessel-series representations for the calculation of the diffraction integral are presented, yielding the point-spread function of the optical system, as occurs in the Nijboer-Zernike theory of aberrations. In this analysis one can allow an arbitrary aberration and a defocus part. The representations are presented in full detail for the cases of coma and astigmatism. The analysis leads to stably converging results in the case of large aberration or defocus values, while the applicability of the original Nijboer-Zernike theory is limited mainly to wave-front deviations well below the value of one wavelength. Because of its intrinsic speed, the analysis is well suited to supplement or to replace numerical calculations that are currently used in the fields of (scanning) microscopy, lithography, and astronomy. In a companion paper [J. Opt. Soc. Am. A 19, 860 (2002)], physical interpretations and applications in a lithographic context are presented, a convergence analysis is given, and a comparison is made with results obtained by using a numerical package.

Rational development of transcranial current stimulation (tCS) requires solving the ‘forward problem’: the computation of the electric field distribution in the head resulting from the application of scalp currents. Derivation of forward models has represented a major effort in brain stimulation research, with model complexity ranging from spherical shells to individualized head models based on magnetic resonance imagery. Despite such effort, an easily accessible benchmark head model is greatly needed when individualized modeling is either undesired (to observe general population trends as opposed to individual differences) or unfeasible. Here, we derive a closed-form linear system which relates the applied current to the induced electric potential. It is shown that in the spherical harmonic (Fourier) domain, a simple scalar multiplication relates the current density on the scalp to the electric potential in the brain. Equivalently, the current density in the head follows as the spherical convolution between the scalp current distribution and the point-spread function of the head, which we derive. Thus, if one knows the spherical harmonic representation of the scalp current (i.e. the electrode locations and current intensity to be employed), one can easily compute the resulting electric field at any point inside the head. Conversely, one may also readily determine the scalp current distribution required to generate an arbitrary electric field in the brain (the ‘backward problem’ in tCS). We demonstrate the simplicity and utility of the model with a series of characteristic curves which sweep across a variety of stimulation parameters: electrode size, depth of stimulation, head size and anode–cathode separation. Finally, theoretically optimal montages for targeting an infinitesimal point in the brain are shown.
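The harmonic-domain relation described above, a scalar multiplication per spherical-harmonic degree, can be sketched in a few lines. The transfer factors below are toy placeholders, not values derived from an actual head model:

```python
# Minimal sketch of spherical-filter application in the harmonic domain:
# if the scalp current density has spherical-harmonic coefficients
# c[(l, m)] and the head acts as a filter with scalar factor f(l) per
# degree l, the potential coefficients are the products f(l) * c[(l, m)].
# The transfer function used in the example is a made-up low-pass shape.

def apply_head_filter(current_coeffs, transfer):
    """current_coeffs: dict mapping (l, m) -> coefficient.
    transfer: callable mapping degree l -> scalar filter factor."""
    return {(l, m): transfer(l) * c for (l, m), c in current_coeffs.items()}

# Example with a toy transfer function 1 / (1 + l): higher degrees
# (finer spatial detail on the scalp) are attenuated more, which is
# the qualitative behavior of a spatial point-spread function.
coeffs = {(0, 0): 1.0, (1, -1): 0.5, (1, 0): 0.2, (2, 1): 0.1}
potential = apply_head_filter(coeffs, lambda l: 1.0 / (1 + l))
```

The "backward problem" is then a pointwise division by the same factors, subject to the usual caution that dividing by small high-degree factors amplifies noise.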

We present the stray-light point-spread functions (PSFs) and their inverses we characterized for the Atmospheric Imaging Assembly (AIA) EUV telescopes on board the Solar Dynamics Observatory (SDO) spacecraft. The inverse kernels are approximate inverses under convolution. Convolving the original Level 1 images with them produces images with improved stray-light characteristics. We demonstrate the usefulness of these PSFs by applying them to two specific cases: photometry and differential emission measure (DEM) analysis. The PSFs consist of a narrow Gaussian core, a diffraction component, and a diffuse component represented by the sum of a Gaussian-truncated Lorentzian and a shoulder Gaussian. We determined the diffraction term using the measured geometry of the diffraction pattern identified in flare images and the theoretically computed intensities of the principal maxima of the first few diffraction orders. To determine the diffuse component, we fitted its parameterized model using iterative forward-modeling of the lunar interior in the SDO/AIA images from the 2011 March 4 lunar transit. We find that deconvolution significantly improves the contrast in dark features such as miniature coronal holes, though the effect was marginal in bright features. On a percentage-scattering basis, the PSFs for SDO/AIA are better by a factor of two than that of the EUV telescope on board the Transition Region And Coronal Explorer mission. A preliminary analysis suggests that deconvolution alone does not affect DEM analysis of small coronal loop segments with suitable background subtraction. We include the derived PSFs and their inverses as supplementary digital materials.

While point-spread function (PSF)-based positron emission tomography (PET) reconstruction effectively improves the spatial resolution and image quality of PET, it may damage its quantitative properties by producing edge artifacts, or Gibbs artifacts, which appear to cause overestimation of regional radioactivity concentration. In this report, we investigated how edge artifacts produce negative effects on the quantitative properties of PET. Experiments with a National Electrical Manufacturers Association (NEMA) phantom, containing radioactive spheres of a variety of sizes and a background filled with cold air or water, or radioactive solutions, showed that profiles modified by edge artifacts were reproducible regardless of background μ values, and the effects of edge artifacts increased with increasing sphere-to-background radioactivity concentration ratio (S/B ratio). Profiles were also affected by edge artifacts in complex fashion in response to variable combinations of sphere sizes and S/B ratios; central single-peak overestimation of up to 50% was occasionally noted in relatively small spheres with high S/B ratios. Effects of edge artifacts were obscured in spheres with low S/B ratios. In patient images with a variety of focal lesions, areas of higher radioactivity accumulation were generally more enhanced by edge artifacts, but the effects were variable depending on the size of and accumulation in the lesion. PET images generated using PSF-based reconstruction are therefore not appropriate for the evaluation of SUV.
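A toy 1D illustration (not the PET reconstruction algorithm itself) of how resolution recovery overshoots at sharp boundaries: blurring a top-hat "sphere" profile and then sharpening it with a simple unsharp-mask step produces edge values exceeding the true plateau, mimicking the Gibbs-like edge artifact discussed above. All names and parameters are ours:

```python
# Toy demonstration of edge overshoot from naive resolution recovery.
import math

def gaussian_kernel(sigma, radius):
    """Normalized, truncated Gaussian kernel."""
    k = [math.exp(-0.5 * (x / sigma) ** 2) for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve(signal, kernel):
    """1D convolution with edge-clamped boundaries."""
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - r, 0), len(signal) - 1)
            acc += w * signal[idx]
        out.append(acc)
    return out

# True object: a top-hat of amplitude 1.0 (a "sphere" profile).
tophat = [0.0] * 20 + [1.0] * 20 + [0.0] * 20
kernel = gaussian_kernel(2.0, 8)
blurred = convolve(tophat, kernel)          # scanner blur
# One unsharp-mask sharpening pass with gain 2 (naive recovery).
resharp = [b + 2.0 * (b - s) for b, s in zip(blurred, convolve(blurred, kernel))]
overshoot = max(resharp)                    # exceeds the true plateau of 1.0
```

The maximum of `resharp` exceeds 1.0 near the edges, so a maximum-based measure (such as SUVmax) read from the sharpened profile overestimates the true concentration, which is the qualitative failure mode reported above.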

X-ray mirrors with high focusing performance are commonly used in different sectors of science, such as X-ray astronomy, medical imaging and synchrotron/free-electron laser beamlines. While deformations of the mirror profile may cause degradation of the focus sharpness, a deliberate deformation of the mirror can be made to endow the focus with a desired size and distribution, via piezo actuators. The resulting profile can be characterized with suitable metrology tools and correlated with the expected optical quality via a wavefront propagation code or, sometimes, predicted using geometric optics. In the latter case, and for the special class of profile deformations with monotonically increasing derivative, i.e. concave upwards, the point-spread function (PSF) can even be predicted analytically. Moreover, under these assumptions, the relation can also be reversed: from the desired PSF the required profile deformation can be computed analytically, avoiding the use of trial-and-error search codes. However, the computation has so far been limited to geometric optics, which entailed some limitations: for example, mirror diffraction effects and the size of the coherent X-ray source were not considered. In this paper, the beam-shaping formalism in the framework of physical optics is reviewed, in the limit of small light wavelengths and in the case of Gaussian intensity wavefronts. Some examples of shaped profiles are also shown, aiming at turning a Gaussian intensity distribution into a top-hat one, and checks of the shaping performance are made by computing the at-wavelength PSF by means of the WISE code.

The point-spread function is widely used to characterize the three-dimensional imaging capabilities of an optical system. Usually, attention is paid only to the intensity point-spread function, whereas the phase point-spread function is most often neglected because the phase information is not retrieved in noninterferometric imaging systems. However, phase point-spread functions are needed to evaluate phase-sensitive imaging systems, and we believe that phase data can play an essential role in the full characterization of aberrations. In this paper, standard diffraction models have been used for the computation of the complex amplitude point-spread function. In particular, the Debye vectorial model has been used to compute the amplitude point-spread function of ×63/0.85 and ×100/1.3 microscope objectives, exemplifying the phase point-spread function specific to each polarization component of the electromagnetic field. The effect of aberrations on the phase point-spread function is then analyzed for a microscope objective used under nondesigned conditions, by developing the Gibson model (Gibson & Lanni, 1991), modified to compute the three-dimensional amplitude point-spread function in amplitude and phase. The results have revealed a novel anomalous phase behaviour in the presence of spherical aberration, providing access to the quantification of the aberrations. This work mainly proposes a method to measure the complex three-dimensional amplitude point-spread function of an optical imaging system. The approach consists in measuring and interpreting the amplitude point-spread function by evaluating, in amplitude and phase, the image of a single emitting point, a 60-nm-diameter tip of a Near Field Scanning Optical Microscopy fibre, with an original digital holographic experimental setup. A single hologram gives access to the transverse amplitude point-spread function. The three-dimensional amplitude point-spread function is obtained by performing an axial scan of the

We assess the validity of an extended Nijboer-Zernike approach [J. Opt. Soc. Am. A 19, 849 (2002)], based on recently found Bessel-series representations of diffraction integrals comprising an arbitrary aberration and a defocus part, for the computation of optical point-spread functions of circular, aberrated optical systems. These new series representations yield a flexible means to compute optical point-spread functions, both accurately and efficiently, under defocus and aberration conditions that seem to cover almost all cases of practical interest. Because of the analytical nature of the formulas, there are no discretization effects limiting the accuracy, as opposed to the more commonly used numerical packages based on strictly numerical integration methods. Instead, we have an easily managed criterion, expressed in the number of terms to be included in the Bessel-series representations, guaranteeing the desired accuracy. For this reason, the analytical method can also serve as a calibration tool for the numerically based methods. The analysis is not limited to pointlike objects but can also be used for extended objects under various illumination conditions. The calculation schemes are simple and permit one to trace the relative strength of the various interfering complex-amplitude terms that contribute to the final image intensity function.

This paper addresses the problem of 3D deconvolution of through-focus widefield microscope datasets (Z-stacks). One of the most difficult stages in brightfield deconvolution is finding the point-spread function. A theoretically calculated point-spread function (called a 'synthetic PSF' in this paper) requires foreknowledge of many system parameters and still gives only approximate results. A point-spread function measured from a sub-resolution bead suffers from low signal-to-noise ratio, compounded in the brightfield setting (by contrast to fluorescence) by absorptive, refractive and dispersal effects. This paper describes a method of point-spread function estimation based on measurements of a Z-stack through a thin sample. This Z-stack is deconvolved by an idealized point-spread function derived from the same Z-stack to yield a point-spread function of high signal-to-noise ratio that is also inherently tailored to the imaging system. The theory is validated by a practical experiment comparing the non-blind 3D deconvolution of the yeast Saccharomyces cerevisiae with the point-spread function generated using the method presented in this paper (called the 'extracted PSF') to a synthetic point-spread function. Restoration of both high- and low-contrast brightfield structures is achieved with fewer artefacts using the extracted point-spread function obtained with this method. Furthermore, the deconvolution progresses further (more iterations are allowed before the error function reaches its nadir) with the extracted point-spread function than with the synthetic one, indicating that the extracted point-spread function is a better fit to the brightfield deconvolution model.

We present results of the point-spread function (PSF) calibration of the hard X-ray optics of the Nuclear Spectroscopic Telescope Array (NuSTAR). Immediately post-launch, NuSTAR observed bright point sources such as Cyg X-1, Vela X-1, and Her X-1 for the PSF calibration. We use the point source...

The PLATO space mission is designed to detect telluric planets in the habitable zone of solar-type stars, and simultaneously characterise the host star using ultra-high-precision photometry. The photometry will be performed on board using weighted masks. However, to reach the required precision, corrections will have to be performed by the ground segment and will rely on precise knowledge of the instrument PSF (point-spread function). Here we propose to model the PSF using a microscanning method.

Point-spread function (PSF) modeling offers the ability to account for resolution-degrading phenomena within the PET image generation framework. PSF modeling improves resolution and enhances contrast, but at the same time significantly alters image noise properties and induces an edge overshoot effect. Thus, studying the effect of PSF modeling on quantitation task performance can be very important. Frameworks explored in the past involved a dichotomy of PSF versus no-PSF modeling. By contrast, the present work focuses on quantitative performance evaluation of standard uptake value (SUV) PET images, while incorporating a wide spectrum of PSF models, including those that under- and over-estimate the true PSF, for the potential of enhanced quantitation of SUVs. The developed framework first analytically models the true PSF, considering a range of resolution-degradation phenomena (including photon non-collinearity, inter-crystal penetration and scattering) as present in data acquisitions with modern commercial PET systems. In the context of oncologic liver FDG PET imaging, we generated 200 noisy datasets per image set (with clinically realistic noise levels) using an XCAT anthropomorphic phantom with liver tumors of varying sizes. These were subsequently reconstructed using the OS-EM algorithm with varying modeled PSF kernels. We focused on quantitation of both SUVmean and SUVmax, including assessment of contrast recovery coefficients, as well as noise-bias characteristics (including both image roughness and coefficient of variability), for different tumors/iterations/PSF kernels. It was observed that an overestimated PSF yielded more accurate contrast recovery for a range of tumors, and typically improved quantitative performance. For a clinically reasonable number of iterations, edge enhancement due to PSF modeling (especially due to an over-estimated PSF) was in fact seen to lower SUVmean bias in small tumors. Overall, the results indicate that exactly matched PSF

We propose an unbiased estimate of a filtered version of the mean squared error, the blur-SURE (Stein's unbiased risk estimate), as a novel criterion for estimating an unknown point-spread function (PSF) from the degraded image only. The PSF is obtained by minimizing this new objective functional over a family of Wiener processings. Based on this estimated blur kernel, we then perform nonblind deconvolution using our recently developed algorithm. The SURE-based framework is exemplified with a number of parametric PSFs involving a scaling factor that controls the blur size; a typical example of such a parametrization is the Gaussian kernel. The experimental results demonstrate that minimizing the blur-SURE yields highly accurate estimates of the PSF parameters, which also result in a restoration quality that is very similar to the one obtained with the exact PSF when plugged into our recent multi-Wiener SURE-LET deconvolution algorithm. The highly competitive results obtained highlight the great potential of developing more powerful blind deconvolution algorithms based on SURE-like estimates.

Accurate models of a telescope's point-spread function are key to predicting its performance and extracting information from observations. Developed at STScI since 2010, WebbPSF is a flexible Python-based PSF simulation tool initially developed for JWST's imaging, spectroscopy, and coronagraphic instruments. We present improvements that allow this tool to simulate PSFs for the WFIRST wide-field imaging mode, as well as additional spectroscopy modes for the NIRSpec, MIRI, and NIRISS instruments on JWST. The WFIRST wide-field imaging mode is also the first WebbPSF model to simulate PSF variation across the entire field of view. These variations are included in the Fraunhofer-domain PSF calculation as Zernike polynomial terms up to Z22. As WFIRST is still early in its development, high-spatial-frequency wavefront errors (beyond Z22) are incorporated using an optical path difference map from another notable 2.4 meter space telescope. Common infrastructure to build simulated optical instruments has been made available as POPPY (Physical Optics Propagation in Python), an open-source library that has seen contributions from users in astronomy and beyond.

From a historical point of view, it was only through the advent of the CCD as a linear, high-dynamic-range panoramic detector that it became possible to overcome the source confusion problem for stellar photometry, e.g., in star clusters or nearby galaxies. The ability to accurately sample the point-spread function (PSF) in two dimensions and to use it as a template for fitting severely overlapping stellar images is of fundamental importance for crowded-field photometry, and has thus become the foundation for the determination of accurate color-magnitude diagrams of globular clusters and the study of resolved stellar populations in nearby galaxies. Analogous to CCDs, the introduction of integral field spectrographs has opened a new avenue for crowded-field 3D spectroscopy, which benefits in the same way from PSF-fitting techniques as CCD photometry does. This paper presents first experience with sampling the PSF in 3D spectroscopy, reviews the effects of atmospheric refraction, discusses background subtraction problems, and presents several science applications as obtained from observations with the PMAS instrument at Calar Alto Observatory.

Regulation of transcription requires cooperation between sequence-specific transcription factors and numerous coregulatory proteins. In IL-4/IL-13 signaling several coactivators for STAT6 have been identified, but the molecular mechanisms of STAT6-mediated gene transcription are still not fully understood. Here we identified by a proteomic approach that PTB-associated splicing factor (PSF) interacts with STAT6. In cells the interaction required IL-4 stimulation and was observed both with endogenous and ectopically expressed proteins. The ligand dependency of the interaction suggested involvement of phosphorylation, and IL-4 stimulation increased tyrosine phosphorylation of PSF and STAT6. Functional analysis demonstrated that ectopic expression of PSF resulted in inhibition of STAT6-mediated gene transcriptional activation and mRNA expression of Ig heavy chain germline Ig ε, while knockdown of PSF increased...

This paper investigates a new approach devoted to displacement vector estimation in ultrasound imaging. The main idea is to adapt the image formation to a given displacement estimation method to increase the precision of the estimation. The displacement is identified as the zero crossing of the phase of the complex cross-correlation between signals extracted from the lateral direction of the ultrasound RF image. For precise displacement estimation, a linear phase slope is needed as well as a high phase slope. Consequently, a particular point-spread function (PSF) dedicated to this estimator is designed. This PSF, showing oscillations in the lateral direction, leads to synthesis of lateral RF signals. The estimation is included in a 2-D displacement vector estimation method. The improvement of this approach is evaluated quantitatively by simulation studies. A comparison with a speckle...

Ultra-wide field-of-view (UWFOV) imaging systems are affected by various aberrations, most of which are highly angle-dependent. A description of UWFOV imaging systems, such as microscopy optics, security camera systems and other special space-variant imaging systems, is a difficult task that can be achieved by estimating the point-spread function (PSF) of the system. This paper proposes a novel method for modeling the space-variant PSF of an imaging system using the Zernike polynomial wavefront description. The PSF estimation algorithm is based on obtaining the field-dependent expansion coefficients of the Zernike polynomials by fitting real image data of the analyzed imaging system, using an iterative approach with an initial estimate of the fitting parameters to ensure convergence robustness. The method is promising as an alternative to the standard approach based on Shack–Hartmann interferometry, since the estimate of the aberration coefficients is processed directly in the image plane. The approach is tested on simulated and laboratory-acquired image data that generally show good agreement. The resulting data are compared with the results of other modeling methods. The proposed PSF estimation method provides around 5% accuracy of the optical system model.
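As a sketch of the wavefront building block used in this method, a few low-order Zernike polynomials can be evaluated directly from the standard radial-polynomial formula. The (n, m) indexing convention, term selection and function names here are our own illustrative choices:

```python
# Hedged sketch: unnormalized Zernike polynomials Z_n^m on the unit
# disk, via the standard radial polynomial R_n^{|m|}, plus a wavefront
# assembled as a weighted sum of terms.
import math

def zernike(n, m, rho, theta):
    """Zernike polynomial Z_n^m (unnormalized) at polar point (rho, theta)."""
    m_abs = abs(m)
    # Radial part R_n^{|m|}(rho).
    radial = 0.0
    for k in range((n - m_abs) // 2 + 1):
        radial += ((-1) ** k * math.factorial(n - k)
                   / (math.factorial(k)
                      * math.factorial((n + m_abs) // 2 - k)
                      * math.factorial((n - m_abs) // 2 - k))) * rho ** (n - 2 * k)
    if m > 0:
        return radial * math.cos(m * theta)
    if m < 0:
        return radial * math.sin(m_abs * theta)
    return radial

def wavefront(coeffs, rho, theta):
    """coeffs: dict mapping (n, m) -> expansion coefficient."""
    return sum(c * zernike(n, m, rho, theta) for (n, m), c in coeffs.items())
```

For instance, Z_2^0 = 2ρ² − 1 (defocus) evaluates to 1 at the pupil edge (ρ = 1), and Z_1^1 = ρ cos θ is tilt; a field-dependent PSF model of the kind described above fits one coefficient set per field angle.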

In the present paper, and a series of papers to follow, the Fourier analytical properties of multiple annuli coded aperture (MACA) and complementary multiple annuli coded aperture (CMACA) systems are investigated. First, the transmission function for MACA and CMACA is derived using Fourier methods and, based on the Fresnel-Kirchhoff diffraction theory, the formulae for the point-spread function are formulated. The PSF maxima and minima are calculated for both the MACA and CMACA systems. The dependence of these properties on the number of zones is studied and reported in this paper.

There is ample anatomical, physiological and psychophysical evidence that the mammalian retina contains networks that mediate interactions among neighboring receptors, resulting in intersecting transformations between input images and their corresponding neural output patterns. The almost universally accepted view is that the principal form of interaction involves lateral inhibition, resulting in an output pattern that is the convolution of the input with a "Mexican hat" or difference-of-Gaussians spread function, having a positive center and a negative surround. A closely related process is widely applied in digital image processing, and in photography as "unsharp masking". The authors show that a simple and fundamentally different process, involving no inhibitory or subtractive terms, can also account for the physiological and psychophysical findings that have been attributed to lateral inhibition. This process also results in a number of fundamental effects that occur in mammalian vision and that would be of considerable significance in robotic vision, but which cannot be explained by lateral inhibitory interaction.

Purpose: A method was developed to correct for systematic errors in estimating the thickness of thin bones due to image blurring in CT images, using bone interfaces to estimate the point-spread function (PSF). This study validates the accuracy of the PSFs estimated using said method from various clinical CT images featuring cortical bones. Methods: Gaussian PSFs, characterized by a different extent in the z (scan) direction than in the x and y directions, were obtained using our method from 11 clinical CT scans of a cadaveric craniofacial skeleton. These PSFs were estimated for multiple combinations of scanning parameters and reconstruction methods. The actual PSF for each scan setting was measured using the slanted-slit technique within the image slice plane and along the longitudinal axis. The Gaussian PSF and the corresponding modulation transfer function (MTF) are compared against the actual PSF and MTF for validation. Results: The differences (errors) between the actual and estimated full-width at half-maximum (FWHM) of the PSFs were 0.09 ± 0.05 and 0.14 ± 0.11 mm for the xy and z axes, respectively. The overall errors in the predicted frequencies measured at the 75%, 50%, 25%, 10%, and 5% MTF levels were 0.06 ± 0.07 and 0.06 ± 0.04 cycles/mm for the xy and z axes, respectively. The accuracy of the estimates depended on whether the images were reconstructed with a standard kernel (Toshiba's FC68, PSF FWHM mean error 0.06 ± 0.05 mm, MTF mean error 0.02 ± 0.02 cycles/mm) or a high-resolution bone kernel (Toshiba's FC81, PSF FWHM error 0.12 ± 0.03 mm, MTF mean error 0.09 ± 0.08 cycles/mm). Conclusions: The method is accurate in 3D for an image reconstructed using a standard reconstruction kernel, which conforms to the Gaussian PSF assumption, but less accurate when using a high-resolution bone kernel. The method is a practical and self-contained means of estimating the PSF in clinical CT images featuring cortical bones, without the need for phantoms or any prior knowledge.
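
For a Gaussian PSF as assumed above, FWHM and MTF are linked analytically: FWHM = 2·sqrt(2 ln 2)·σ and MTF(f) = exp(−2π²σ²f²), so the frequency at any MTF level follows by inversion. A small sketch (the σ value is hypothetical, not from the study):

```python
import numpy as np

def fwhm_from_sigma(sigma):
    # FWHM of a Gaussian PSF: 2*sqrt(2*ln 2)*sigma ≈ 2.3548*sigma
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma

def gaussian_mtf(f, sigma):
    # The MTF (modulus of the PSF's Fourier transform) is itself Gaussian
    return np.exp(-2.0 * np.pi**2 * sigma**2 * f**2)

def freq_at_mtf_level(level, sigma):
    # Invert the MTF to get the spatial frequency (cycles/mm) at a level
    return np.sqrt(-np.log(level) / (2.0 * np.pi**2 * sigma**2))

sigma = 0.4                     # mm, hypothetical in-plane blur
f50 = freq_at_mtf_level(0.5, sigma)   # frequency at the 50% MTF level
```

This is the conversion implied when the abstract compares PSF FWHM errors (mm) with MTF-level frequency errors (cycles/mm).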

For a clear, well corrected imaging aperture in space, the point-spread function (PSF) in its Gaussian image plane has the conventional, diffraction-limited, tightly focused Airy form. Away from that plane, however, the PSF broadens rapidly, resulting in a loss of sensitivity and transverse resolution that makes such a traditional best-optics approach untenable for rapid 3D image acquisition. One must scan in focus to maintain high sensitivity and resolution as one acquires image data, slice by slice, from a 3D volume with reduced efficiency. In this paper we describe a computational-imaging approach to overcome this limitation, one that uses pupil-phase engineering to fashion a PSF that, although not as tight as the Airy spot, maintains its shape and size while rotating uniformly with changing defocus over many waves of defocus phase at the pupil edge. As one of us has shown recently [1], the subdivision of a circular pupil aperture into M Fresnel zones, with the mth zone having an outer radius proportional to √m and impressing a spiral phase profile of form mφ on the light wave, where φ is the azimuthal angle coordinate measured from a fixed x axis (the dislocation line), yields a PSF that rotates with defocus while keeping its shape and size. Physically speaking, a nonzero defocus of a point source means a quadratic optical phase in the pupil that, because of the square-root dependence of the zone radius on the zone number, increases on average by the same amount from one zone to the next. This uniformly incrementing phase yields, in effect, a rotation of the dislocation line, and thus a rotated PSF. Since the zone-to-zone phase increment depends linearly on defocus to first order, the PSF rotates uniformly with changing defocus. For an M-zone pupil, a complete rotation of the PSF occurs when the defocus-induced phase at the pupil edge changes by M waves. Our recent simulations of reconstructions from image data for 3D image scenes comprised of point sources at
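
The zone construction described above can be sketched numerically. The following pupil mask is an illustrative reconstruction (not the authors' code): zone m of M has outer radius sqrt(m/M) of the pupil radius, and carries the spiral phase m·φ.

```python
import numpy as np

def rotating_psf_pupil(n, M):
    """Phase-only pupil mask on an n x n grid: zone m (1..M) occupies
    sqrt((m-1)/M) <= r < sqrt(m/M) and carries spiral phase m*phi,
    where phi is the azimuthal angle from the x axis."""
    x = np.linspace(-1.0, 1.0, n)
    xx, yy = np.meshgrid(x, x)
    r = np.hypot(xx, yy)
    phi = np.arctan2(yy, xx)
    # zone index from the square-root radius rule: m = ceil(r^2 * M)
    m = np.ceil(np.clip(r, 1e-12, 1.0)**2 * M)
    return np.where(r <= 1.0, np.exp(1j * m * phi), 0.0)

P = rotating_psf_pupil(256, 5)   # 5-zone pupil, unit transmission inside
```

Propagating a defocused wavefront through such a mask (e.g. with an FFT) would exhibit the PSF rotation the abstract describes; the sketch only constructs the mask itself.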

Two-dimensional echocardiography continues to be the most widely used modality for the assessment of cardiac function due to its effectiveness, ease of use, and low cost. Echocardiographic images are derived from the mechanical interaction between the ultrasound field and the contractile heart tissue. Previously, in [6], based on B-mode echocardiographic simulations, we showed that motion estimation errors are significantly higher in shift-varying simulations than in shift-invariant simulations. In order to ascertain the effect of the spatial variance of the ultrasonic field point-spread function (PSF) and the transducer geometry on motion estimation, in the current paper several simple canonical cardiac motions, such as translation in the axial and horizontal directions and out-of-plane motion, were simulated and the motion estimation errors were calculated. For axial motions, the greatest angular errors occurred within the lateral regions of the image, irrespective of the motion estimation technique that was adopted. We hypothesize that the transducer geometry and the PSF spatial variance were the underlying sources of error for the motion estimation methods. No similar conclusions could be made regarding motion estimation errors for azimuthal and out-of-plane ultrasound simulations.

... confocal scanning microscopes for the above-mentioned amplitude filters. These results of axial and lateral irradiances are graphically represented by constructing a computer program using MATLAB. The obtained results are compared with that obtained in case of circular, annular, and Martinez-Corral apodized aperture ...

The goal of this program is to observe the center of Omega Cen {which has a nice flat distribution of reasonably-spaced-out stars} in order to construct a PSF model for ACS's three workhorse filters: F435W, F606W, and F814W. These also happen to be the three ACS filters that will be used in the Frontier-Field program. PI-Anderson will use the data to construct a 9x10 array of fiducial PSFs that describe the static variation of the PSF across the frame for each filter. He will also provide some simple routines that the public can use to insert PSFs into images. The observations will dither the center of the cluster around in a circle with a radius of about 30" such that any single star never falls in the ACS gap more than once. This has the additional benefit that we can use this large dither to validate or improve the distortion solution at the same time we are solving for the PSF. We will get four exposures through each of the ACS filters. The exposure times for the three ACS filters {F435W, F606W, and F814W} were chosen to maximize the number of bright unsaturated stars while simultaneously minimizing the number of saturated stars present. To do this, we made sure that the SGB {which is where the LF rises precipitously} is just below the saturation level. We used archival images from GO-9444 and GO-10775 to ensure that 339s for F435W, 80s for F606W, and 90s for F814W is perfect for this. In addition to the ACS exposures, we also take parallels with WFC3/IR. These exposures will sample a field that is 6' off center. The core radius is 2.5', so this outer field should have a density that is 5x lower than at the center, meaning the typical star is maybe 2.5x farther away. This should compensate for the larger WFC3/IR pixels and will allow us to construct PSFs that are appropriate. We take a total of 32 WFC3/IR exposures, each with an exposure time of 103s, and divide these 32 exposures among the four FF WFC3/IR filters: F105W, F125W, F140W, and F160W. We will use

The properties of UWFC (Ultra Wide-Field Camera) astronomical systems, along with specific visual data in astronomical images, contribute to a comprehensive evaluation of the acquired image data. These systems contain many different kinds of optical aberrations which have a negative effect on image quality and imaging system transfer characteristics, and reduce the precision of astronomical measurement. It is very important to answer two main questions: first, how do astrometric measurements depend on optical aberrations, and second, how do optical aberrations affect the transfer characteristics of the whole optical system? If we define the PSF (Point-Spread Function) [2] of an optical system, we can use some suitable methods for restoring the original image. Optical aberration models for LSI/LSV (Linear Space Invariant/Variant) [2] systems are presented in this paper. These models are based on Seidel and Zernike approximating polynomials [1]. Optical aberration models serve as a suitable tool for estimating and fitting the wavefront aberration of a real optical system. Real data from the BOOTES (Burst Observer and Optical Transient Exploring System) experiment are used for our simulations. Problems related to UWFC imaging systems, especially a restoration method in the presence of a space-variant PSF, are described in this paper. A model of the space-variant imaging system, and partially of the space-variant optical system, has been implemented in MATLAB. The “brute force” method has been used for restoration of the testing images. The results of different deconvolution algorithms are demonstrated in this paper. This approach could help to improve the precision of astronomic measurements.
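
The Zernike-based aberration models mentioned above build wavefronts from polynomials on the unit pupil disk. A minimal numpy sketch of one such term, the defocus polynomial Z = √3·(2r² − 1), which averages to zero over the disk and is unit-normalized (illustrative; the paper's implementation is in MATLAB):

```python
import numpy as np

def zernike_defocus(n):
    """Sample the Zernike defocus term sqrt(3)*(2 r^2 - 1) on an
    n x n grid over the unit disk; return the map and a disk mask."""
    x = np.linspace(-1.0, 1.0, n)
    xx, yy = np.meshgrid(x, x)
    r2 = xx**2 + yy**2
    z = np.sqrt(3.0) * (2.0 * r2 - 1.0)
    z[r2 > 1.0] = 0.0          # zero outside the pupil
    return z, r2 <= 1.0

W, disk = zernike_defocus(401)
```

A full model would sum many such terms with fitted coefficients to approximate the measured wavefront aberration; the orthonormality (disk mean 0, mean square 1) is what makes the coefficient fit well conditioned.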

Large wideband two-dimensional (2-D) arrays are essential for high-resolution three-dimensional (3-D) ultrasound imaging. Since the tremendous element count of a fully sampled large 2-D array is not affordable in any practical 3-D ultrasound imaging system, it is necessary to reduce the element count through sparse 2-D array design. Sparse array design requires that both the positions and weights of the array elements be arbitrarily alterable. Hence a proper evaluation tool that can deal with an arbitrary array is integral to optimizing the array structure and apodization function. The pulse-echo point-spread function (PSF) has long been a common tool for evaluating the performance of wideband arrays in ultrasound imaging, and it also plays an important role in wideband ultrasound simulations. However, so far the conventional ultrasound simulation tools can only calculate the pulse-echo PSF of arbitrary wideband arrays in the time domain, because of the existence of nonuniform nodes in the spatial impulse response expressions, which prevents the application of the FFT for fast computation of the time-domain convolutions. As a result, the ultra-high time consumption of pulse-echo PSF computation for a large arbitrary wideband array prevents it from being used as the evaluation tool by stochastic optimization methods, which need massive iterations when designing large sparse 2-D arrays. This paper aims to make the pulse-echo PSF available as a tool in designing large sparse 2-D arrays by proposing a fast computation method for far-field pulse-echo PSFs of arbitrary wideband arrays. In the paper, fast computation of the wideband spatial impulse responses of a 2-D array is first realized in the frequency domain by employing the nonuniform fast Fourier transform (NUFFT), under the point-source assumption in the far field. On the basis of that, fast computation of the time-domain convolutions is made possible by using the FFT. In addition, a short inverse FFT (IFFT) is applied in
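
The speed-up hinges on replacing time-domain convolutions with products of spectra. A generic FFT-convolution sketch (this is not the paper's NUFFT pipeline; the signals are toy 1-D stand-ins for the excitation and a spatial impulse response):

```python
import numpy as np

def fft_convolve(a, b):
    """Linear convolution via FFT: zero-pad to the full output length,
    multiply the spectra, and transform back."""
    n = len(a) + len(b) - 1
    nfft = 1 << (n - 1).bit_length()        # next power of two >= n
    spec = np.fft.rfft(a, nfft) * np.fft.rfft(b, nfft)
    return np.fft.irfft(spec, nfft)[:n]

# pulse-echo picture: excitation convolved with an impulse response
rng = np.random.default_rng(0)
excitation = rng.standard_normal(64)
h_tx = rng.standard_normal(128)
y = fft_convolve(excitation, h_tx)
```

For long signals this runs in O(N log N) instead of O(N²), which is why making the spatial impulse responses FFT-compatible (via the NUFFT) matters for optimization loops with massive iteration counts.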

NASA's Kepler and K2 missions have impacted all areas of astrophysics in unique and important ways by delivering high-precision time series data on asteroids, stars, and galaxies. For example, both the official Kepler pipeline and the various community-owned pipelines have been successful at discovering a myriad of transiting exoplanets around a wide range of stellar types. However, the existing pipelines tend to focus on studying isolated stars using simple aperture photometry, and often perform sub-optimally in crowded fields where objects are blended. To address this issue, we present a point-spread function (PSF) photometry toolkit for Kepler and K2 which is able to extract light curves from crowded regions, such as the Beehive Cluster, the Lagoon Nebula, and the open cluster M67, which were all recently observed by Kepler. We present a detailed discussion of the theory and the practical use, and demonstrate our tool on various levels of crowding. Finally, we discuss the future use of the tool on data from the TESS mission. The code is open source and available on GitHub as part of the PyKE toolkit for Kepler/K2 data analysis.
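
The core idea of PSF photometry in crowded fields — fitting the fluxes of overlapping sources that share a known PSF model — can be sketched as a linear least-squares problem. This is a minimal illustration with a Gaussian PSF, not the PyKE implementation:

```python
import numpy as np

def gaussian_psf(shape, x0, y0, sigma):
    """Unit-flux Gaussian PSF centered at (x0, y0) in pixel coords."""
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    g = np.exp(-((xx - x0)**2 + (yy - y0)**2) / (2.0 * sigma**2))
    return g / g.sum()

def fit_fluxes(image, positions, sigma):
    """Simultaneous least-squares fit of source fluxes at known
    positions: each column of the design matrix is one source's PSF."""
    A = np.column_stack([gaussian_psf(image.shape, x, y, sigma).ravel()
                         for x, y in positions])
    fluxes, *_ = np.linalg.lstsq(A, image.ravel(), rcond=None)
    return fluxes

# two blended sources only 4 px apart, well inside one "aperture"
shape, sigma = (21, 21), 1.5
truth = [(8.0, 10.0, 120.0), (12.0, 10.0, 80.0)]
img = sum(f * gaussian_psf(shape, x, y, sigma) for x, y, f in truth)
est = fit_fluxes(img, [(x, y) for x, y, _ in truth], sigma)
```

Simple aperture photometry over this pair would mix the two fluxes; the simultaneous fit separates them because the model accounts for the overlap.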

Hematopoietic stem cells (HSCs) can survive long-term in a state of dormancy. Little is known about how histone deacetylase inhibitors (HDACi) affect HSC kinetics. Here, we use trichostatin A (TSA), a histone deacetylase inhibitor, to enforce histone acetylation and show that this suppresses cell cycle entry by dormant HSCs. Previously, we found that haploinsufficiency of PSF1, a DNA replication factor, led to attenuation of the bone marrow (BM) HSC pool size and a lack of acute proliferation after 5-FU ablation. Because PSF1 protein is present in CD34+ transiently amplifying HSCs but not in CD34− long-term reconstituting HSCs, which rest in a dormant state, we analyzed the relationship between dormancy and PSF1 expression, and how a histone deacetylase inhibitor affects this. We found that CD34+ HSCs produce a long, functional PSF1 (PSF1a) but CD34− HSCs produce a shorter, possibly non-functional PSF1 (PSF1b, c; dominantly PSF1c). Using PSF1a-overexpressing NIH-3T3 cells in which the endogenous PSF1 promoter is suppressed, we found that TSA treatment promotes production of the shorter form of PSF1, possibly by inducing recruitment of E2F family factors upstream of the PSF1 transcription start site. Our data document one mechanism by which histone deacetylase inhibitors affect the dormancy of HSCs by regulating the DNA replication factor PSF1. - Highlights: • Hematopoietic stem cell dormancy is controlled by histone deacetylase inhibitors. • Dormancy of HSCs is associated with a shorter form of non-functional PSF1. • Histone deacetylase inhibitors suppress PSF1 promoter activity.

Kepler and K2 data analysis reported in the literature is mostly based on aperture photometry. Because of Kepler's large, undersampled pixels and the presence of nearby sources, aperture photometry is not always the ideal way to obtain high-precision photometry and, because of this, the data set has not been fully exploited so far. We present a new method that builds on our experience with undersampled HST images. The method involves point-spread function (PSF) neighbour subtraction and was specifically developed to exploit the huge potential offered by the K2 `super-stamps' covering the cores of dense star clusters. Our test-bed targets were the NGC 2158 and M 35 regions observed during K2 Campaign 0. We present our PSF modelling and demonstrate that, by using a high-angular-resolution input star list from the Asiago Schmidt telescope as the basis for PSF neighbour subtraction, we are able to reach magnitudes as faint as KP ≃ 24 with a photometric precision of 10 per cent over 6.5 h, even in the densest regions. At the bright end, our photometric precision reaches ˜30 parts per million. Our method leads to a considerable improvement at faint magnitudes (KP ≳ 15.5) with respect to classical aperture photometry. This improvement is more significant in crowded regions. We also extracted raw light curves of ˜60 000 stars and detrended them for systematic effects induced by spacecraft motion and other artefacts that harm K2 photometric precision. We present a list of 2133 variables.

Many applications can benefit from the use of pupil filters for controlling the light intensity distribution near the focus of an optical system. Most of the design methods for such filters are based on a second-order expansion of the point-spread function (PSF). Here, we present a new procedure for designing radially symmetric pupil filters. It is more precise than previous procedures, as it considers the exact expression of the PSF, expanded as a function of first-order Bessel functions. Furthermore, this new method presents other advantages: the height of the side lobes can be easily controlled; it allows the design of amplitude-only, phase-only, or hybrid filters; and the coefficients of the PSF expansion can be directly related to the filter parameters. Finally, our procedure allows the design of filters with very different behaviours and optimal performance.

FRIDA (inFRared Imager and Dissector for the Adaptive optics system of the Gran Telescopio Canarias) has been designed as a cryogenic and diffraction-limited instrument that will offer broad- and narrow-band imaging and integral field spectroscopy (IFS). Both the imaging and IFS observing modes will use the same Teledyne 2Kx2K detector. The instrument will be installed at the Nasmyth B station, behind the GTC Adaptive Optics system (GTCAO). FRIDA will provide the IFS mode using a 30-slice Integral Field Unit (IFU). The IFU design is based on the University of Florida FISICA, where the mirror block arrays are diamond turned on monolithic metal blocks. The FRIDA IFU consists mainly of two mirror blocks with 30 spherical mirrors each. The image slicing is performed by a block of 30 cylindrical mirrors, each 400 μm wide. It also has a Schwarzschild relay based on two off-axis spherical mirrors that adapts the GTCAO-corrected PSF to the slicer mirror dimensions. To readapt the sliced PSF to the spectrograph input numerical aperture, the IFU has an afocal system of two parabolic off-axis mirrors. The AO PSF is bigger than the slice mirror dimensions, and this produces diffraction effects. These diffraction effects, combined with the intrinsic IFU and spectrograph aberrations, produce the final instrumental PSF of the IFS mode. In order to evaluate the instrumental PSF quality of the FRIDA IFS, modeling simulations were performed with the ZEMAX Physical Optics Propagation (POP) module. In this work the simulations are described, and the PSF quality and uniformity on a reconstructed IFS image are evaluated. It is shown that the PSF quality of the IFS mode, including the instrument manufacturing tolerances, fulfills the specifications.

The purpose of this work is to demonstrate the functionality and performance of a PSF-based geometric distortion correction for high-field functional animal EPI. The EPI method was extended to measure the PSF, and a postprocessing chain was implemented in Matlab for offline distortion correction. The correction procedure was applied to phantom and in vivo imaging of mice and rats at 9.4T using different SE-EPI and DWI-EPI protocols. Results show a significant improvement in image quality for single- and multishot EPI. Using a reduced FOV in the PSF encoding direction clearly reduced the acquisition time for PSF data by an acceleration factor of 2 or 4, without affecting the correction quality.

Over the past decade, a growing population of planetary-mass companions (PMCs) has been discovered at wide separations (>100 AU) from their host stars, challenging existing models of both star and planet formation. It is unclear whether these systems represent the low-mass extreme of stellar binary formation or the high-mass and wide-orbit extreme of planet formation theories, as various proposed formation pathways inadequately explain the physical and orbital aspects of these systems. Even so, determining which scenario best reproduces the observed characteristics of the PMCs will come once a statistically robust sample of directly-imaged PMCs is found and studied. We are developing an automated pipeline to search for wide-orbit PMCs to young stars in Spitzer/IRAC images. A Markov Chain Monte Carlo (MCMC) algorithm is the backbone of our novel point-spread function (PSF) subtraction routine that efficiently creates and subtracts χ2-minimizing instrumental PSFs, simultaneously measuring astrometry and infrared photometry of these systems across the four IRAC channels (3.6 μm, 4.5 μm, 5.8 μm, and 8 μm). In this work, we present the results of a Spitzer/IRAC archival imaging study of 11 young, low-mass (0.044-0.88 M⊙; K3.5-M7.5) stars known to have faint, low-mass companions in 3 nearby star-forming regions (Chamaeleon, Taurus, and Upper Scorpius). We characterize the systems found to have low-mass companions with non-zero [I1] - [I4] colors, potentially signifying the presence of a circum(sub?)stellar disk. Plans for future pipeline improvements and paths forward will also be discussed. Once this computational foundation is optimized, the stage is set to quickly scour the nearby star-forming regions already imaged by Spitzer, identify potential candidates for further characterization with ground- or space-based telescopes, and increase the number of widely-separated PMCs known.

Several articles have looked at factors that affect the adjustment of point spreads, based on hot hands or streaks, over smaller durations of time. This study examines these effects for 34 regular seasons in the National Basketball Association (NBA). Estimating a Seemingly Unrelated Regression model using all 34 seasons, all streaks significantly impacted point spreads and the difference in actual points. When estimating each season individually, differences emerged, particularly when examining winning and losing streaks of six games or more. The results indicate both the presence of momentum effects and the gambler's fallacy.

Polysulfone and cellulose acetate are common materials in separation processes. In this research, a polysulfone/cellulose acetate (PSF/CA) blend membrane was prepared. The aim of this research was to study the effect of evaporation time in casting of the PSF/CA membrane and its performance in filtration. CA was obtained by an acetylation process of bacterial cellulose (BC) from fermentation of coconut water. Fourier Transform Infrared (FTIR) spectroscopy was used to examine functional groups of BC, CA, and commercial cellulose acetate. Substitution of acetyl groups was determined by a titration method. Blend membranes were prepared through a phase inversion technique in which the composition of PSF/PEG/CA/NMP (%w) was 15/5/5/75. Polyethylene glycol (PEG) and N-methyl-2-pyrrolidone (NMP) acted as the pore-forming agent and solvent, respectively. Variation of evaporation times was used as the parameter to examine water uptake, flux, and morphology of the PSF/CA blend membranes. FTIR spectra of CA show a characteristic peak of the acetyl group at 1220 cm-1, indicating that BC was acetylated successfully. The degree of substitution of BCA was found to be 2.62. The highest water flux, measured at 2 bar, was 106.31 L.m-2.h-1 at the 0-minute variation, and decreased with increasing evaporation time. The morphology of the PSF/BCA blend membranes, investigated by Scanning Electron Microscopy (SEM), showed that porous asymmetric membranes were formed.

The Large Area Telescope (LAT) on the Fermi Gamma-ray Space Telescope has a point-spread function (PSF) with large tails, consisting of events affected by tracker inefficiencies, inactive volumes, and hard scattering; these tails can make source confusion a limiting factor. The parameter CTBCORE, available in the publicly released Extended Fermi LAT data (http://fermi.gsfc.nasa.gov/ssc/data/access/), estimates the quality of each event's direction reconstruction; by implementing a cut on this parameter, the tails of the PSF can be suppressed at the cost of losing effective area. We implement cuts on CTBCORE and present updated instrument response functions derived from the Fermi LAT data itself, along with all-sky maps generated with these cuts. Having shown the effectiveness of these cuts, especially at low energies, we encourage their use in analyses where angular resolution is more important than Poisson noise.
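
The trade-off described here (tail suppression versus lost effective area) can be illustrated with a toy event-quality cut. All numbers below are synthetic; this is not Fermi LAT data and the quality variable only mimics the role of CTBCORE:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
# toy events: a quality score in [0, 1]; angular error is drawn narrow
# for well-reconstructed events and broad for poor ones (the PSF tails)
quality = rng.uniform(0.0, 1.0, n)
ang_err = np.where(quality > 0.5,
                   np.abs(rng.normal(0.0, 0.2, n)),   # good tracks
                   np.abs(rng.normal(0.0, 1.0, n)))   # tail events

def containment_68(err):
    """68% containment radius: a standard PSF width summary."""
    return np.quantile(err, 0.68)

cut = quality > 0.7              # hypothetical quality threshold
r_all = containment_68(ang_err)
r_cut = containment_68(ang_err[cut])
efficiency = cut.mean()          # fraction of effective area retained
```

Tightening the threshold shrinks the containment radius (sharper PSF) while `efficiency` drops, which is exactly the trade the abstract recommends when angular resolution matters more than Poisson noise.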

In this paper, we explore the merit of calculating the geometrical optical transfer function (GOTF) in optical design by comparing the time to calculate it with the time to calculate the diffraction optical transfer function (DOTF). We determine the DOTF by numerical integration of the pupil function autocorrelation (which reduces to an integration of a complex exponential of the aberration difference function), 2D digital autocorrelation of the pupil function, and the Fourier transform (FT) of the point-spread function (PSF); and we determine the GOTF by the FT of the geometrical PSF (which reduces to an integration over the pupil plane of a complex exponential that is a scalar product of the spatial frequency and transverse ray aberration vectors) and the FT of the spot diagram. Our starting point for calculating the DOTF is the wave aberrations of the system in its pupil plane, and for the GOTF, the transverse ray aberrations in the image plane. Numerical results for primary aberrations and some typical imaging systems show that the direct numerical integrations are slow, but the GOTF calculation by an FT of the spot diagram is two or even three times slower than the DOTF calculation by an FT of the PSF, depending on the aberration. We conclude that the calculation of the GOTF is, at best, an approximation of the DOTF, and only for large aberrations; the GOTF does not offer any advantage in the optical design process, which negates its utility.
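
The fastest DOTF route mentioned above, the FT of the PSF, can be sketched in a few lines. A toy Gaussian PSF stands in for a real aberrated one; the normalization at zero frequency follows the standard OTF convention:

```python
import numpy as np

def mtf_from_psf(psf):
    """DOTF route: OTF = Fourier transform of the PSF; the MTF is its
    modulus, normalized to 1 at zero frequency (the PSF integral)."""
    otf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf)))
    mtf = np.abs(otf)
    # after fftshift, zero frequency sits at the array center
    return mtf / mtf[psf.shape[0] // 2, psf.shape[1] // 2]

n = 64
x = np.arange(n) - n // 2
xx, yy = np.meshgrid(x, x)
psf = np.exp(-(xx**2 + yy**2) / (2.0 * 2.0**2))   # toy Gaussian PSF
psf /= psf.sum()
mtf = mtf_from_psf(psf)
```

A single FFT over the sampled PSF replaces the slow direct autocorrelation integral, which is why this route wins the timing comparison in the paper.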

The aim of this study was to understand how social and health inequalities are expressed in the health-disease profile of individuals with functional losses and dependence receiving home care from Family Health Care Program teams in the administrative districts of the city of São Paulo. The districts were grouped according to the Social Exclusion Index through a cluster analysis, and a statistical description of the variables was developed in order to compare them. For the city as a whole, there was a prevalence of senior women with light functional losses and dependence requiring low-complexity care, compatible with Primary Health Care. Districts with greater social exclusion had a larger proportion of males under 60 years old and children with severe disabilities who need care of greater complexity. The article debates the need for home care policies designed for the specificities of the city of São Paulo instead of policies focused on specific population groups.

The abundant heterogeneous nuclear ribonucleoprotein M (hnRNP M) is able to associate with early spliceosomes and to influence splicing patterns of specific pre-mRNAs. Here, by a combination of immunoprecipitation and pull-down assays, we have identified PSF (polypyrimidine tract-binding protein-associated splicing factor) and p54nrb, two highly related proteins involved in transcription and RNA processing, as new binding partners of hnRNP M. HnRNP M was found to co-localize with PSF within a subset of nuclear paraspeckles and to largely co-fractionate with PSF and p54nrb in biochemical nuclear matrix preparations. In cells transfected with an alternatively spliced preprotachykinin (PPT) minigene, expression of hnRNP M promoted exon skipping while expression of PSF favoured exon inclusion. The latter effect was reverted specifically by co-expressing the full-length hnRNP M or a deletion mutant capable of interaction with PSF and p54nrb. Together our data provide new insights and some functional implications on the hnRNP M network of interactions.

PSF3 (partner of Sld five 3) is a member of the tetrameric complex termed GINS, composed of SLD5, PSF1, PSF2, and PSF3, and well-conserved evolutionarily. Previous studies suggested that some GINS complex members are upregulated in cancer, but PSF3 expression in colon carcinoma has not been investigated. Here, we established a mouse anti-PSF3 antibody, and examined PSF3 expression in human colon carcinoma cell lines and colon carcinoma specimens. We found that PSF3 is expressed in the crypt region in normal colonic mucosa and that many PSF3-positive cells co-expressed Ki-67. This suggests that PSF3-positivity of normal mucosa is associated with cell proliferation. Expression of the PSF3 protein was greater in carcinoma compared with the adjacent normal mucosa, and even stronger in high-grade malignancies, suggesting that it may be associated with colon cancer progression. PSF3 gene knock-down in human colon carcinoma cell lines resulted in growth inhibition characterized by delayed S-phase progression. These results suggest that PSF3 is a potential biomarker for diagnosis of progression in colon cancer and could be a new target for cancer therapy.

The objective of this project funded by ONR (grant # N00014-06-1-0374) was to measure, understand and be able to predict the propagation of light through the air-sea interface under various sea states...

The metallurgical irradiation experiment at the Oak Ridge Research Reactor Poolside Facility (ORR-PSF) was designed as a benchmark to test the accuracy of radiation embrittlement predictions in the pressure vessel wall of light water reactors on the basis of results from surveillance capsules. The PSF metallurgical Blind Test is concerned with the simulated surveillance capsule (SSC) and the simulated pressure vessel capsule (SPVC). The data from the ORR-PSF benchmark experiment are the basis for comparison with the predictions made by participants of the metallurgical ''Blind Test''. The Blind Test required the participants to predict the embrittlement of the irradiated specimen based only on dosimetry and metallurgical data from the SSC1 capsule. This exercise included both the prediction of damage fluence and the prediction of embrittlement based on the predicted fluence. A variety of prediction methodologies was used by the participants. No glaring biases or other deficiencies were found, but neither were any of the methods clearly superior to the others. Closer analysis shows a rather complex and poorly understood relation between fluence and material damage. Many prediction formulas can give an adequate approximation, but further improvement of the prediction methodology is unlikely at this time given the many unknown factors. Instead, attention should be focused on determining realistic uncertainties for the predicted material changes. The Blind Test comparisons provide some clues for the size of these uncertainties. In particular, higher uncertainties must be assigned to materials whose chemical composition lies outside the data set for which the prediction formula was obtained. 16 references, 14 figures, 5 tables

The purpose of this work was to develop methods to measure the presampled two-dimensional modulation transfer function (2D MTF) of digital imaging systems. A custom x-ray 'point source' phantom was created by machining 256 holes of 0.107 mm diameter through a 0.5-mm-thick copper plate. The phantom was imaged several times, resulting in many images of individual x-ray 'spots'. The center of each spot (with respect to the pixel matrix) was determined to subpixel accuracy by fitting each spot to a 2D Gaussian function. The subpixel spot center locations were used to create a 5x oversampled system point spread function (PSF), which characterizes the optical and electrical properties of the system and is independent of the pixel sampling of the original image. The modulus of the Fourier transform of the PSF was calculated. Next, the Fourier function was normalized to its zero-frequency value. Finally, the Fourier transform function was divided by the first-order Bessel function that defines the frequency content of the holes, resulting in the presampled 2D MTF. The presampled 2D MTF was measured for a 0.1 mm pixel pitch computed radiography system and for a 0.2 mm pixel pitch flat panel digital imaging system that utilized a cesium iodide scintillator. Comparison of the axial components of the 2D MTF with one-dimensional MTF measurements acquired using an edge device method demonstrated that the two methods produce consistent results.
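The core of the pipeline above (oversampled PSF → Fourier modulus → normalization to the zero-frequency value) can be sketched numerically. The following Python fragment is an illustrative reconstruction, not the authors' implementation: the grid size, sample spacing, and Gaussian PSF width are assumptions for the example, and the hole-aperture (Bessel) correction is omitted because the synthetic PSF is not imaged through finite holes.

```python
import numpy as np

def mtf_from_psf(psf, dx):
    """2D MTF as |FFT(PSF)|, normalized to 1 at zero frequency.

    psf : 2D array sampled on a fine (oversampled) grid
    dx  : sample spacing in mm (e.g. pixel pitch / oversampling factor)
    Returns (mtf, fx), where fx are the frequency samples in cycles/mm.
    """
    otf = np.fft.fftshift(np.fft.fft2(psf))
    mtf = np.abs(otf)
    mtf /= mtf.max()  # normalize to the zero-frequency (DC) value
    fx = np.fft.fftshift(np.fft.fftfreq(psf.shape[0], d=dx))
    return mtf, fx

# Synthetic Gaussian PSF on a 5x-oversampled grid (0.1 mm pixels -> 0.02 mm samples)
n, dx, sigma = 256, 0.02, 0.05  # sigma in mm, purely illustrative
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
psf = np.exp(-(X**2 + Y**2) / (2 * sigma**2))

mtf, fx = mtf_from_psf(psf, dx)

# For a Gaussian PSF the MTF is analytic: exp(-2 * (pi * sigma * f)**2)
f = fx[fx >= 0]
expected = np.exp(-2 * (np.pi * sigma * f) ** 2)
measured = mtf[n // 2, n // 2:]  # central row, non-negative frequencies
print(np.allclose(measured, expected, atol=1e-3))
```

With a real spot image the PSF would come from the oversampled spot data rather than an analytic formula, but the Fourier-and-normalize step is identical.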

The increasing prevalence of antibiotic-resistant Shigella sp. emphasizes that alternatives to conventional antibiotics are needed. The Siphoviridae bacteriophage (phage) pSf-2, infecting S. flexneri ATCC® 12022, was isolated from the Geolpocheon stream in Korea. Morphological analysis by transmission electron microscopy revealed that pSf-2 has a head of about 57 ± 4 nm in diameter with a long tail of 136 ± 3 nm in length and 15 ± 2 nm in width. One-step growth analysis revealed that pSf-2 has a latent period of 30 min and a burst size of 16 PFU/infected cell. The DNA genome of pSf-2 is composed of 50,109 bp with a G+C content of 45.44%. The genome encodes 83 putative ORFs, 19 putative promoters, and 23 transcriptional terminator regions. Genome sequence analysis of pSf-2 and comparative analysis with the homologous T1-like Shigella phages Shfl1 and pSf-1 revealed that pSf-2 is a novel T1-like Shigella phage. These results show that pSf-2 may have high potential as a biocontrol agent against shigellosis. The genomic information may also lead to further understanding of phage biodiversity, especially of T1-like phages.

Human Reliability Analysis (HRA) methods usually take into account the effect of Performance Shaping Factors (PSF). The adequate treatment of PSFs in HRA for Probabilistic Safety Assessment (PSA) models is therefore of crucial importance, and there is an important need for collecting PSF data from simulator experiments. During the task complexity experiment 2003-2004, carried out in the BWR simulator of the Halden Man-Machine Laboratory (HAMMLAB), PSF data were collected by means of a PSF Questionnaire. Seven crews (each composed of a shift supervisor, a reactor operator and a turbine operator) from Swedish Nuclear Power Plants participated in the experiment. The PSF Questionnaire collected data on the following factors: procedures, training and experience, indications, controls, team management, team communication, individual work practice, available time for the tasks, number of tasks or information load, masking, and seriousness. The main statistically significant results on the PSF data collection and analysis of the task complexity experiment 2003/2004 (HWR-810) are presented. The comments about the PSFs provided by operators on the PSF Questionnaire were analysed, and the comments for each PSF in the scenarios were summarised using a content analysis technique. (Author)

Recent studies showed that active piezoelectric structural fiber (PSF) composites may achieve significant and simultaneous improvements in sensing/actuating, stiffness, fracture toughness and vibration damping. These characteristics can be very important in civil, mechanical and aerospace structures. The PSF is fabricated by coating piezoceramic onto a silicon carbide core fiber with an electrophoretic deposition (EPD) process to overcome the fragile nature of monolithic piezoelectric materials. The PSF composite laminates are made of longitudinally poled PSFs that are unidirectionally deployed in a polymer binding matrix. The PSF laminate transducer has electrical inputs/outputs that are delivered through a separate etched interdigital electrode layer. This study analyzed the electromechanical properties of the active PSF composite laminate with the generalized dilute scheme by considering multi-inclusions. The well-known Mori-Tanaka approach was used to evaluate the concentration tensor in the multi-inclusion micromechanics model. To accurately predict the transverse properties, the extended rule of mixtures was applied by considering the inclusions' geometry and shape. Micromechanical finite element modeling was also conducted with a representative volume element (RVE) to compare with the micromechanics analysis of the electromechanical properties. The micromechanics analysis and finite element micromechanical modeling were conducted with varied fiber geometry dimensions and volume fractions. These comparison studies indicate that the combined micromechanics models with the generalized dilute scheme can effectively predict the electro-elastic properties of multi-inclusion PSF composites.

Aim: The use of PSF-based 3D reconstruction algorithms (PSF) is desirable in most clinical PET exams due to their superior image quality. Rb-82 cardiac PET is inherently noisy due to the short half-life and prompt gammas and would presumably benefit from PSF. However, the quantitative behavior of PSF ... images, filtered backprojection (FBP). Furthermore, since myocardial segmentation might be affected by image quality, two different approaches to segmentation implemented in standard software (Carimas (Turku PET Centre) and QPET (Cedars-Sinai)) are utilized. Method: 14 dynamic rest-stress Rb-82 patient ...

Incorporation of a resolution model during statistical image reconstruction often produces images of improved resolution and signal-to-noise ratio. A novel and practical methodology to rapidly and accurately determine the overall emission and detection blurring component of the system matrix using a printed point source array within a custom-made Perspex phantom is presented. The array was scanned at different positions and orientations within the field of view (FOV) to examine the feasibility of extrapolating the measured point source blurring to other locations in the FOV and the robustness of measurements from a single point source array scan. We measured the spatially-variant image-based blurring on two PET/CT scanners, the B-Hi-Rez and the TruePoint TrueV. These measured spatially-variant kernels and the spatially-invariant kernel at the FOV centre were then incorporated within an ordinary Poisson ordered subset expectation maximization (OP-OSEM) algorithm and compared to the manufacturer's implementation using projection space resolution modelling (RM). Comparisons were based on a point source array, the NEMA IEC image quality phantom, the Cologne resolution phantom and two clinical studies (carbon-11 labelled anti-sense oligonucleotide [11C]-ASO and fluorine-18 labelled fluoro-L-thymidine [18F]-FLT). Robust and accurate measurements of spatially-variant image blurring were successfully obtained from a single scan. Spatially-variant resolution modelling resulted in notable resolution improvements away from the centre of the FOV. Comparison between spatially-variant image-space methods and the projection-space approach (the first such report, using a range of studies) demonstrated very similar performance with our image-based implementation producing slightly better contrast recovery (CR) for the same level of image roughness (IR).
These results demonstrate that image-based resolution modelling within reconstruction is a valid alternative to projection-based modelling, and that, when using the proposed practical methodology, the necessary resolution measurements can be obtained from a single scan. This approach avoids the relatively time-consuming and involved procedures previously proposed in the literature.
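The essence of image-based resolution modelling is that the blur kernel enters the EM forward and backprojection steps. The following Python fragment is a deliberately minimal 1D sketch that omits the tomographic projector entirely, so the EM update reduces to Richardson-Lucy deconvolution with the measured kernel; the phantom, kernel width, and iteration count are all invented for illustration and do not reproduce the OP-OSEM implementation described above.

```python
import numpy as np

def gauss_kernel(sigma, radius=8):
    """Normalized 1D Gaussian kernel standing in for a measured PSF."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur(img, kernel):
    return np.convolve(img, kernel, mode="same")

def em_deconvolve(measured, kernel, n_iter=200):
    """MLEM update with an image-space blur model (Richardson-Lucy).

    With the projector omitted, the system model is just a convolution;
    since the kernel is symmetric, its transpose is the kernel itself.
    """
    x = np.full_like(measured, measured.mean())
    sens = blur(np.ones_like(measured), kernel)  # sensitivity image
    for _ in range(n_iter):
        est = blur(x, kernel)
        ratio = measured / np.maximum(est, 1e-12)
        x *= blur(ratio, kernel) / sens
    return x

# Toy 1D phantom: two nearby point sources blurred by the "scanner" PSF
truth = np.zeros(128)
truth[60], truth[68] = 100.0, 80.0
kernel = gauss_kernel(sigma=3.0)
measured = blur(truth, kernel)

recon = em_deconvolve(measured, kernel)
print(recon.max() > measured.max())  # sharpened peaks exceed the blurred maximum
```

In a real reconstruction the convolution would be wrapped around (or inside) the projector, and the spatially-variant kernels measured from the point source array would replace the single Gaussian used here.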

This master's thesis examines management support and safety culture in order to create a suggestion for a new PSF in Petro-HRA. Building on SPAR-H, the work processes PSF is examined and two selected factors, management support and safety culture, are chosen for review. A literature review uncovers five factors for management support and six factors for safety culture. The findings of the review are discussed, in addition to the discussion and selection of four factors that will come to make up...

The spatial resolution of the Siemens High Resolution Research Tomograph (HRRT) dedicated brain PET scanner installed at Copenhagen University Hospital (Rigshospitalet) was measured using a point-source phantom with high statistics. Further, it was demonstrated how the newly developed 3D-OSEM PSF...

Emission Computed Axial Tomography (ECAT) has been applied in nuclear medicine for the past few years. Owing to attenuation and scatter along the ray path, adequate correction methods are required. In this thesis, a correction method for attenuation, detector response and Compton scatter is proposed. The method developed is based on a PSF model. The parameters of the models were derived by fitting experimental and simulation data. Because of its flexibility, a Monte Carlo simulation method was employed. Using the PSF models, it was found that the ECAT problem can be described by the modified equation thus obtained. Application of the reconstruction procedure to simulation data yields satisfactory results. The algorithm tends to amplify noise and distortion in the data, however; therefore, its applicability to patient studies remains to be seen. (Auth.)

A methodology is described to evaluate the dosimetry and metallurgical data from the two-year ORR-PSF metallurgical irradiation experiment. The first step is to obtain a three-dimensional map of damage exposure parameter values based on neutron transport calculations and dosimetry measurements which are obtained by means of the LSL-M2 adjustment procedure. Metallurgical test data are then combined with damage parameter, temperature, and chemistry information to determine the correlation between radiation and steel embrittlement in reactor pressure vessels including estimates for the uncertainties. Statistical procedures for the evaluation of Charpy data, developed earlier, are used for this investigation. The data obtained in this investigation provide a benchmark against which the predictions of the PSF Blind Test can be compared. The results of this investigation and the Blind Test comparison are discussed

The reactor safety R and D work of the Karlsruhe Research Centre (FZKA) has been part of the Nuclear Safety Research Project (PSF) since 1990. The present annual report 1994 summarizes the R and D results. The research tasks are coordinated in agreement with internal and external working groups. The contributions to this report correspond to the status of early 1995. An abstract in English precedes each contribution that is written in German. (orig.)

We show numerical simulations with monochromatic light in the visible for the LBTI Fizeau imager, including opto-dynamical aberrations due here to adaptive optics (AO) errors and to differential piston fluctuations, while other errors have been neglected. The achievable Strehl by the LBTI using two AO systems is close to the Strehl provided by a single standalone AO system, as long as other differential wavefront errors are mitigated. The LBTI Fizeau imager is primarily limited by the AO performance and by the differential piston/tip-tilt errors. Snapshots retain high-angular resolution and high-contrast imaging information by freezing the fringes against piston errors. Several merit functions have been critically evaluated in order to characterize point spread functions and modulation transfer functions for high-contrast imaging applications. The LBTI Fizeau mode can provide an image quality suitable for standard science cases (i.e. a Strehl above 70 per cent) by simultaneously achieving an AO correction better than ≈λ/18 RMS for both short and long exposures, and a piston correction better than ≈λ/8 RMS for long exposures, or simply below the coherence length for short exposures. Such results, which can be applied to any observing wavelength, suggest that AO and piston control at the LBTI would already improve the contrast at near- and mid-infrared wavelengths. Therefore, the LBTI Fizeau imager can be used for high-contrast imaging, providing a high-Strehl regime (by both AO systems), a cophasing mode (by a fringe tracker) and a burst mode (by a fast camera) to record fringed speckles in short exposures.

An attempt has been made to investigate nanofiller-incorporated polysulfone (PSF) and polyvinylpyrrolidone (PVP) polymer membranes prepared by the phase inversion method. Initially, the nanofillers, viz. zinc oxide (ZnO) nanoparticles and graphene oxide-zinc oxide (GO-ZnO) nanocomposite, were synthesized and then directly incorporated into the PSF/PVP blend during the preparation of the membranes. The prepared membranes were subjected to FE-SEM, AFM, BET, contact angle, tensile test and anti-bacterial studies. Significant membrane morphologies and nanoporous properties were observed by FE-SEM and BET, respectively. The hydrophilicity, mechanical strength and water permeability of the ZnO- and GO-ZnO-incorporated membranes were enhanced relative to the bare membrane. Antibacterial activity was assessed by measuring the inhibition zones formed around the membrane by the disc-diffusion method using Escherichia coli (gram-negative) as a model bacterium. The nanofiller-incorporated membranes likewise exhibited higher antibacterial performance than the bare membrane.

Introduction: The aim was to confirm that the PSF (probability of stone formation) changed appropriately following medical therapy in recurrent stone formers. Materials and Methods: Data were collected on 26 Brazilian stone-formers. A baseline 24-hour urine collection was performed prior to treatment. Details of the medical treatment initiated for stone disease were recorded. A PSF calculation was performed on the 24-hour urine sample using the 7 urinary parameters required: voided volume, oxalate, calcium, urate, pH, citrate and magnesium. A repeat 24-hour urine sample was collected for PSF calculation after treatment. Comparison was made between the PSF scores before and during treatment. Results: At baseline, 20 of the 26 patients (77%) had a high PSF score (> 0.5). Of the 26 patients, 17 (65%) showed an overall reduction in their PSF profiles with a medical treatment regimen. Eleven patients (42%) changed from a high-risk (PSF > 0.5) to a low-risk (PSF < 0.5) score; the rest retained a PSF > 0.5 during both assessments. Conclusions: The PSF score reduced following medical treatment in the majority of patients in this cohort.

Obtaining the arterial input function (AIF) from image data in dynamic positron emission tomography (PET) examinations is a non-invasive alternative to arterial blood sampling. In simultaneous PET/magnetic resonance imaging (PET/MRI), high-resolution MRI angiographies can be used to define major arteries for correction of partial-volume effects (PVE) and point spread function (PSF) response in the PET data. The present study describes a fully automated method to obtain the image-derived input function (IDIF) in PET/MRI. Results are compared to those obtained by arterial blood sampling. To segment the trunk of the major arteries in the neck, a high-resolution time-of-flight MRI angiography was postprocessed by a vessel-enhancement filter based on the inertia tensor. Together with the measured PSF of the PET subsystem, the arterial mask was used for geometrical deconvolution, yielding the time-resolved activity concentration averaged over a major artery. The method was compared to manual arterial blood sampling at the hind leg of 21 sheep (animal stroke model) during measurement of blood flow with 15O-water. Absolute quantification of activity concentration was compared after bolus passage during steady state, i.e., between 2.5- and 5-min post injection. Cerebral blood flow (CBF) values from blood sampling and IDIF were also compared. The cross-calibration factor obtained by comparing activity concentrations in blood samples and IDIF during steady state is 0.98 ± 0.10. In all examinations, the IDIF provided a much earlier and sharper bolus peak than in the time course of activity concentration obtained by arterial blood sampling. CBF using the IDIF was 22% higher than CBF obtained by using the AIF yielded by blood sampling. The small deviation between arterial blood sampling and IDIF during steady state indicates that correction of PVE and PSF is possible with the method presented. The differences in bolus dynamics and, hence, CBF values can be explained by the ...

Several new paradigms for super-resolution in optical systems use 'a posteriori' digital image processing. In these ventures the three-dimensional point spread function (PSF) of the lens plays a key role in image acquisition. A straightforward tailoring of the PSF can be performed by appropriate pupil plane filtering. With a ...

After obtaining the point spread function (PSF) parameter, an iterative Wiener filter is adopted to complete the restoration. We experimentally illustrate its performance on simulated data and a real blurred image. Results show that the proposed PSF parameter estimation technique and the image restoration method are effective.
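The restoration step can be illustrated with a classical single-pass frequency-domain Wiener filter; the iterative variant and the PSF parameter estimation described above are not reproduced here. In the sketch below, the scene, the Gaussian PSF width, and the noise-to-signal ratio are all assumptions made for the example.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-6):
    """Frequency-domain Wiener restoration: G = H* / (|H|^2 + NSR).

    `nsr` is the assumed noise-to-signal power ratio (the regularizer).
    """
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(G * np.fft.fft2(blurred)))

# Synthetic example: blur a simple scene with a Gaussian PSF, then restore.
scene = np.zeros((64, 64))
scene[24:40, 24:40] = 1.0

x = np.arange(-7, 8)
X, Y = np.meshgrid(x, x)
psf = np.exp(-(X**2 + Y**2) / (2 * 2.0**2))
psf /= psf.sum()

# Circular convolution via FFT so the forward model matches the filter exactly
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(psf, s=scene.shape)))
restored = wiener_deconvolve(blurred, psf)

# Restoration should bring the image much closer to the original scene
print(np.linalg.norm(restored - scene) < np.linalg.norm(blurred - scene))
```

With real noisy data, `nsr` would be raised to match the measured noise level, trading resolution recovery against noise amplification.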

PSF1 (Partner of SLD Five 1) is an evolutionarily conserved DNA replication factor that is part of the GINS (Go, Ichi, Nii, and San) complex. The objective of this study was to evaluate the relationship between PSF1 expression and prognosis in patients with non-small cell lung cancer (NSCLC) treated with surgery following preoperative chemotherapy or chemoradiotherapy. Sixty-nine patients with NSCLC treated with surgery following preoperative chemotherapy or chemoradiotherapy who did not achieve pathologic complete response were enrolled. The status of PSF1 expression was evaluated by immunohistochemistry, and the relationship between expression of PSF1 and Ki-67 was determined, as well as correlations between PSF1 expression and prognosis. We found that 27 of 69 patients' tumors (39%) were positive for PSF1 expression. The Ki-67 index was significantly higher in the PSF1-positive versus the PSF1-negative group (p = 0.0026). Five-year disease-free survival of the PSF1-positive group was significantly worse (17.7 vs. 44.3%, p = 0.0088), and the 5-year overall survival also was worse (16.6 vs. 47.2%, p = 0.0059). Moreover, PSF1 expression was found to be a significant independent prognostic factor for shorter survival by Cox multivariate analysis (hazard ratio 2.43, 95% confidence interval 1.27-4.60, p = 0.0076). PSF1 is a useful prognostic biomarker to stratify NSCLC patients treated with surgery following preoperative chemotherapy or chemoradiotherapy.

The inverse of the quantum Fisher information (QFI) matrix (and extensions thereof) provides the ultimate lower bound on the variance of any unbiased estimation of a parameter from statistical data, whether of intrinsically quantum mechanical or classical character. We calculate the QFI for Poisson-shot-noise-limited imagery using the rotating PSF that can localize and resolve point sources fully in all three dimensions. We also propose an experimental approach based on the use of computer-generated holograms and projective measurements to realize the QFI-limited variance for the problem of super-resolving a closely spaced pair of point sources at a highly reduced photon cost. The paper presents a preliminary analysis of the quantum-limited three-dimensional (3D) pair optical super-resolution (OSR) problem with potential applications to astronomical imaging and 3D space-debris localization.
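For reference, the bound invoked here is the multi-parameter quantum Cramér-Rao inequality. In standard notation (this is the textbook form, not a result specific to the paper), with symmetric logarithmic derivatives L_i:

```latex
% Quantum Cramer-Rao bound: covariance of any unbiased estimator of
% parameters theta, from N independent copies of the state rho_theta
\operatorname{Cov}\bigl(\hat{\boldsymbol{\theta}}\bigr) \;\succeq\; \frac{1}{N}\, F_Q(\boldsymbol{\theta})^{-1},
\qquad
[F_Q]_{ij} \;=\; \operatorname{Re}\,\operatorname{Tr}\!\bigl[\rho_{\boldsymbol{\theta}}\, L_i L_j\bigr],
\qquad
\partial_i \rho_{\boldsymbol{\theta}} \;=\; \tfrac{1}{2}\bigl(L_i \rho_{\boldsymbol{\theta}} + \rho_{\boldsymbol{\theta}} L_i\bigr).
```

The classical Fisher information of any fixed measurement is upper-bounded by F_Q, which is why the QFI gives the ultimate limit over all measurement choices.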

Nanocomposite membranes composed of polymer and inorganic nanoparticles are a novel means to enhance gas separation performance. In this study, membranes were fabricated from polysulfone (PSf) containing magnesium oxide (MgO) nanoparticles, and the gas permeation properties of the resulting membranes were investigated. Membranes were prepared by solution blending and phase inversion methods. The morphology of the membranes, void formations, MgO distribution and aggregates were observed by SEM analysis. Furthermore, thermal stability, residual solvent in the membrane film and structural degradation of the membranes were analyzed by thermal gravimetric analysis (TGA). The effects of MgO nanoparticles on the glass transition temperature (Tg) of the prepared nanocomposites were studied by differential scanning calorimetry (DSC). The Tg of the nanocomposite membranes increased with MgO loading. Fourier transform infrared (FTIR) spectra of the nanocomposite membranes were analyzed to identify the variations of the bonds. The results obtained from gas permeation experiments with a constant-pressure setup showed that adding MgO nanoparticles to the polymeric membrane structure increased the permeability of the membranes. At 30 wt% MgO loading, the CO2 permeability was enhanced from 25.75×10-16 to 47.12×10-16 mol·m/(m²·s·Pa) and the CO2/CH4 selectivity decreased from 30.84 to 25.65 when compared with pure PSf. For H2, the permeability was enhanced from 44.05×10-16 to 67.3×10-16 mol·m/(m²·s·Pa), whereas the H2/N2 selectivity decreased from 47.11 to 33.58.

Purpose: Measuring and incorporating a scanner-specific point spread function (PSF) within image reconstruction has been shown to improve spatial resolution in PET. However, due to the short half-life of clinically used isotopes, other long-lived isotopes not used in clinical practice are used to perform the PSF measurements. As such, non-optimal PSF models that do not correspond to those needed for the data to be reconstructed are used within resolution modeling (RM) image reconstruction, usually...

A polysulfone (PSf) composite membrane consisting of activated carbon, polyethyleneimine and silver nitrate was prepared by phase inversion. The activated carbon (AC) acts as an adsorbent for heavy metals present in synthetic waste water, while the polysulfone membrane acts as a support. Phase inversion was carried out with the composition of activated carbon varied from 0 to 0.9% while the other components remained constant. The surface morphology of the composite membrane was characterized by scanning electron microscopy (SEM), while heavy metal absorption was quantified by atomic absorption spectrometry (AAS). The SEM images show a symmetric membrane matrix with a sponge structure. The composite membrane with 0.9 wt% AC has the highest water flux as well as removal of heavy metals (chromium, lead, silver and cadmium) compared to the composite membranes with 0.3 wt% AC and 0.5 wt% AC. The percentages of heavy metal reduction by the 0.9 wt% AC composite membrane were 35% cadmium, 19% chromium, 16% silver and 2% lead. The results indicate that the introduction of 0.9 wt% AC indeed plays an important role in enhancing the adsorption of heavy metals from water.

The main challenge in working with underwater imagery results from both the rapid decay of signals due to absorption, which leads to poor signal-to-noise returns, and the blurring caused by strong scattering by the water itself and constituents within it, especially particulates. The modulation transfer function (MTF) of an optical system gives detailed and precise information regarding the system behavior. Underwater imagery can be better restored with knowledge of the system MTF or the point spread function (PSF), its Fourier-transform equivalent, extending the performance range as well as the information retrieval from underwater electro-optical systems. This is critical in many civilian and military applications, including target and especially mine detection, search and rescue, and diver visibility. This effort utilizes test imagery obtained by the Laser Underwater Camera Imaging Enhancer (LUCIE) from Defence Research and Development Canada (DRDC), during an April-May 2006 trial experiment in Panama City, Florida. Images of a standard resolution chart with various spatial frequencies were taken underwater in a controlled optical environment at varying distances. In-water optical properties during the experiment were measured, including the absorption and attenuation coefficients, particle size distribution, and volume scattering function. The resulting images were preprocessed to enhance the signal-to-noise ratio by averaging multiple frames and to remove uneven illumination at the target plane. The MTF of the medium was then derived from measurements of the above imagery, subtracting the effect of the camera system. PSFs converted from the measured MTF were then used to restore the blurred imagery by different deconvolution methods. The effects of polarization from source to receiver on the resulting MTFs were examined, and we demonstrate that matching polarizations do enhance the system transfer functions. This approach also shows promise in deriving medium optical

This study aims to identify the profile of physicians who work or have worked in the PSF (Family Health Program), their main difficulties, and the percentage of family health teams without a physician in the city of São Paulo. A questionnaire based on the key statements of the Capozzolo study was used, with data collected from January to May 2008, together with primary care data from October to December 2007. The main findings include that most interviewees had been qualified for less than five years and cited affinity with the PSF as their motivation for the work. The main difficulties reported were high demand, a high incidence of complex cases, difficulty with referrals, a division of working time inconsistent with health needs, and a lack of incentive for specialization. The primary care data showed that the Eastern Coordination Office suffered the greatest shortage of physicians in the period analyzed, with deficit rates around 20% to 40%, an increase in the deficit toward the end of the year, and persistent deficits in some units.

A small proportion of Palm Oil Mill Effluent (POME) treatment plants convert their wastewater to methane gas, which is then converted into electrical energy. However, for palm oil mills whose wastewater has a Chemical Oxygen Demand below 60,000 mg/L this is not viable, so the purpose of wastewater treatment is only to reach a standard at which the water is safe to discharge into the environment. The wastewater treatment systems generally applied by palm oil mills, especially in North Sumatera, are aerobic and anaerobic; these methods take a relatively long time because they depend heavily on microbial activity. An alternative method for wastewater treatment is membrane technology, because the process is much more effective, the time is relatively short, and it is expected to give more optimal results. The optimum membrane obtained was the PSF19%DMFEVA2T75 membrane, and the permeate produced in the treatment of POME wastewater with this membrane had pH = 7.0, TSS = 148 mg/L, BOD = 149 mg/L and COD = 252 mg/L. The results obtained are in accordance with the POME quality standard.

In recent years the low energy behavior of the Photon Strength Function (PSF) has attracted much attention. A completely consistent description of this behavior is not available. The neutron capture γ-ray spectra measured by the DANCE detector array located at the Los Alamos Neutron Science Center have been used for the study of the PSF below the neutron separation energy. The radiative decay of the compound nuclei 156,157,159Gd has been measured. The spectra were simulated with the DICEBOX code. A variety of phenomenological models of the PSF were considered in the simulations. Comparison of the experimental and simulated spectra will be presented.

The design considerations and operational features of DOPHOT, a point-spread function (PSF) fitting photometry program, are described. Some relevant details of the PSF fitting are discussed. The quality of the photometry returned by DOPHOT is assessed via reductions of an 'artificial' globular cluster generated from a list of stars with known magnitudes and colors. Results from comparative tests between DOPHOT and DAOPHOT using this synthetic cluster and real data are also described.
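The basic operation of PSF-fitting photometry, recovering a star's sub-pixel position and flux by fitting an analytic profile, can be sketched with a plain 2D Gaussian. Note that this is a simplified stand-in: DOPHOT fits a truncated power-series profile rather than a pure Gaussian, and the frame size, noise level, and star parameters below are invented for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, amp, x0, y0, sigma, bg):
    """Circular 2D Gaussian plus constant sky background, flattened for curve_fit."""
    x, y = coords
    r2 = (x - x0) ** 2 + (y - y0) ** 2
    return (amp * np.exp(-r2 / (2 * sigma**2)) + bg).ravel()

# Synthetic star stamp with known parameters to recover
ny = nx = 21
y, x = np.mgrid[0:ny, 0:nx].astype(float)
true = dict(amp=500.0, x0=10.3, y0=9.7, sigma=1.8, bg=20.0)
star = gauss2d((x, y), **true).reshape(ny, nx)
rng = np.random.default_rng(1)
data = star + rng.normal(0, 2.0, star.shape)  # add read-noise-like scatter

# Initial guesses from simple image statistics, then least-squares fit
p0 = [data.max() - np.median(data), nx / 2, ny / 2, 2.0, np.median(data)]
popt, _ = curve_fit(gauss2d, (x, y), data.ravel(), p0=p0)
amp, x0, y0, sigma, bg = popt

flux = 2 * np.pi * amp * sigma**2  # analytic flux of the fitted Gaussian
print(f"{x0:.1f} {y0:.1f}")  # prints: 10.3 9.7
```

The same machinery extends to DOPHOT-style reductions by iterating over detected stars, subtracting each fitted model, and converting the fitted fluxes to instrumental magnitudes.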

Conoscopic holography is a method for recording holograms with incoherent light, first presented in 1985. Its applications range from 3D microscopy to 3D satellite imaging and include robotics. The point spread function (PSF) is a Gabor zone pattern, which is known to have zeros in Fourier space. We present an experimental technique to obtain an invertible PSF with an experimental image reconstruction, and an original algorithm to find the object shape, validated with both simulations and first experimental results.

Objective. This study evaluated variation in functional independence in activities of daily living (ADL) and instrumental activities of daily living (IADL) among individuals with poststroke fatigue (PSF) and poststroke depression (PSD). Methods. A cross-sectional survey involved 65 consenting poststroke survivors who were purposively recruited from physiotherapy clinics of the University College Hospital, Ibadan, Adeoyo Maternity Teaching Hospital, Ibadan, and Federal Medical Center, Gusau. Participants were assessed for symptoms of PSD with the short geriatric depression scale-15, PSF with the fatigue severity scale, ADL with the Barthel Index, and IADL with the Nottingham extended ADL scale. Data analysis was done using Chi-square and unpaired t-tests with a significance level of 0.05. Results. Participants' age ranged from 58 to 80 years. PSD alone (P = 0.002) and both PSF and PSD (P = 0.02) were significantly associated with ADL, while PSF alone was not (P = 0.233). PSD alone (P = 0.001) and both PSF and PSD (P = 0.001) significantly negatively affected IADL, while PSF alone had no significant effect (P = 0.2). Conclusions. Participants with PSD alone and those with both PSF and PSD had lower functional independence in ADL and IADL.

The Lucy-Richardson super-resolution image processing technique, combined with an introduced virtual point spread function (PSF), was used to develop a method for measuring the processing precision of superfine thick-pinhole apertures. The principle of the technique is based on a known ideal image and a degraded image: after restoration and reconstruction of the degraded image with the introduced virtual PSF, the reconstructed image is compared with the ideal image to judge the correctness of the virtual PSF. The effect of the PSF on image reconstruction was first simulated. As indicated by the simulation, an ideal PSF used in the image restoration and reconstruction provides ideal reconstruction results; however, for relatively large PSF sizes, the reconstructed image comes out smaller than the ideal image. Related experiments were then carried out on cobalt radiation sources. In the experiments, the aperture of the shielded collimator used to restrict and align the radiation source was known to be 1.0 mm, thick pinholes with apertures of 0.7 mm and 0.45 mm were used to image the ϕ1 mm radiation source, and the radiation image was recorded on imaging plates with a spatial resolution of 0.05 mm × 0.05 mm. Under the hypothesis that the processing precision of the thick pinhole fulfils the experimental requirements, the PSF obtained from the simulated computation was introduced into the restoration and reconstruction of the recorded images. At the 50% intensity level, the thick pinhole with the 0.7 mm aperture provided a homogeneous image of the radiation source, whereas the thick pinhole with the 0.45 mm aperture provided an elliptical image with a major-to-minor axis ratio of 5:3.
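The restoration step described above can be sketched with a generic Richardson-Lucy iteration (a minimal NumPy/SciPy sketch, not the authors' code; the `psf` argument plays the role of the introduced virtual PSF):

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, n_iter=30, eps=1e-12):
    """Generic Richardson-Lucy deconvolution.

    `blurred` is the degraded image and `psf` the assumed (virtual)
    point spread function; `eps` guards against division by zero.
    """
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1]
    # Start from a flat, positive estimate.
    estimate = np.full(blurred.shape, blurred.mean(), dtype=float)
    for _ in range(n_iter):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / np.maximum(reblurred, eps)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```

Comparing the reconstruction against the known ideal image, as in the paper, then indicates whether the assumed virtual PSF was correct.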

Monodisperse surface-charged submicron polystyrene particles were designed, synthesized, and blended into a polysulfone (PSF) support layer to prepare a high-performance forward osmosis (FO) membrane. The membranes incorporating the particles were characterized with respect to morphology, porosity, and internal osmotic pressure (IOP). Results showed that the polymer particles not only increased the hydrophilicity and porosity of the support layer, but also generated considerable IOP, which helped markedly decrease the structure parameter from 1550 to 670 μm. The measured mass transfer parameters further confirmed the beneficial effects of the surface-charged submicron polymer particles on the performance of the FO membrane. For instance, the water permeability coefficient (5.37 L m-2 h-1 bar-1) and water flux (49.7 L m-2 h-1) of the FO membrane incorporating 5 wt% particles were almost twice those of the FO membrane without incorporation. This study suggests that monodisperse surface-charged submicron polymer particles are potential modifiers for improving the performance of FO membranes.

With the rapid development of smart cities around the world, research relating to smart city evaluation has become a new hotspot in academia. However, there are general problems of cognitive deprivation, lack of planning experience, and a low level of coordination in smart city construction. It is necessary to develop a set of scientific, reasonable, and effective evaluation index systems and evaluation models to analyze the degree of urban intelligent development. Based on the theory of the urban system, we established a comprehensive evaluation index system for urban intelligent development based on the people-oriented, city-system, and resources-flow (PSF) evaluation model. According to the characteristics of this comprehensive evaluation index system, the analytic hierarchy process (AHP), combined with experts' opinions, determines the index weights of the system. We adopted a neural network model to construct the corresponding comprehensive evaluation model to characterize the non-linear characteristics of the comprehensive evaluation indexes, and thus to quantify the comprehensive evaluation indexes of urban intelligent development. Finally, we used the AHP, AHP-BP (Back Propagation), and AHP-ELM (Extreme Learning Machine) models to evaluate the intelligent development level of 151 cities in China, and compared them from the perspective of model accuracy and time cost. The final simulation results show that the AHP-ELM model is the best evaluation model.

Purpose: Measuring and incorporating a scanner-specific point spread function (PSF) within image reconstruction has been shown to improve spatial resolution in PET. However, due to the short half-life of clinically used isotopes, other long-lived isotopes not used in clinical practice are used to

especially as spatial resolution improves. Software-based image fusion remains a complex issue outside the brain. State-of-the-art image quality in a modern PET/CT system includes incorporation of point spread function (PSF) and time-of-flight (TOF) information into the reconstruction, leading to the high...

Current methods of image-based thickness measurement in thin sheet structures utilize second derivative zero crossings to locate the layer boundaries. It is generally acknowledged that the nonzero width of the point spread function (PSF) limits the accuracy of this measurement procedure. We propose

Both the axial and lateral point spread functions (PSFs) and the corresponding irradiances are computed in both cases of conventional and confocal scanning microscopes for the above-mentioned amplitude filters. These results of axial and lateral irradiances are graphically represented by constructing a computer program ...

In row 3, a brain image (AIDS dementia disease) is considered as a standard image. The template image of size 45 × 45 pixels is extracted from the standard brain image and convolved with a point spread function (PSF) of window size h = 8 × 8. Then the template image is matched with the standard brain image. Further, row ...

A simple model has been developed and implemented in Matlab code, predicting the over-exposed pixel area of cameras caused by laser dazzling. Inputs of this model are the laser irradiance on the front optics of the camera, the point spread function (PSF) of the used optics, the integration time of

Lateral charge diffusion in back-illuminated CCDs directly affects the point spread function (PSF) and spatial resolution of an imaging device. This can be of particular concern in thick, back-illuminated CCDs. We describe a technique for measuring this diffusion and present PSF measurements for an 800 x 1100, 15 μm pixel, 280 μm thick, back-illuminated, p-channel CCD that can be over-depleted. The PSF is measured over a wavelength range of 450 nm to 650 nm and at substrate bias voltages between 6 V and 80 V.

.... These inherent optical properties (IOP), although measured frequently due to their important applications in ocean optics, especially in remote sensing, cannot be applied to underwater imaging issues directly, since they inherently reflect the chance of the single scattering.

Time-of-flight (TOF) and point spread function (PSF) modeling have been shown to improve PET reconstructions, but the impact on physicians in the clinical setting has not been thoroughly investigated. A lesion detection and localization study was performed using simulated lesions in real patient images. Four reconstruction schemes were considered: ordinary Poisson OSEM (OP) alone and combined with TOF, PSF, and TOF + PSF. The images were presented to physicians experienced in reading PET images, and the performance of each was quantified using localization receiver operating characteristic. Numerical observers (non-prewhitening and Hotelling) were used to identify optimal reconstruction parameters, and observer SNR was compared to the performance of the physicians. The numerical models showed good agreement with human performance, and best performance was achieved by both when using TOF + PSF. These findings suggest a large potential benefit of TOF + PSF for oncology PET studies, especially in the detection of small, low-intensity, focal disease in larger patients.

In the last few years, optical and near-IR IFU observations from the ground have revolutionized extragalactic astronomy. The unprecedented infrared sensitivity, spatial resolution, and spectral coverage of the JWST IFUs will ensure high demand from the community. For a wide range of extragalactic phenomena (e.g. quasars, starbursts, supernovae, gamma ray bursts, tidal disruption events) and beyond (e.g. nebulae, debris disks around bright stars), PSF contamination will be an issue when studying the underlying extended emission. We propose to provide the community with a PSF decomposition and spectral analysis package for high dynamic range JWST IFU observations allowing the user to create science-ready maps of relevant spectral features. Luminous quasars, with their bright central source (quasar) and extended emission (host galaxy), are excellent test cases for this software. Quasars are also of high scientific interest in their own right as they are widely considered to be the main driver in regulating massive galaxy growth. JWST will revolutionize our understanding of black hole-galaxy co-evolution by allowing us to probe the stellar, gas, and dust components of nearby and distant galaxies, spatially and spectrally. We propose to use the IFU capabilities of NIRSpec and MIRI to study the impact of three carefully selected luminous quasars on their hosts. Our program will provide (1) a scientific dataset of broad interest that will serve as a pathfinder for JWST science investigations in IFU mode and (2) a powerful new data analysis tool that will enable frontier science for a wide swath of astrophysical research.

Achieving the goals of the Large Synoptic Survey Telescope for Dark Energy science requires a detailed understanding of CCD sensor effects. One such sensor effect is the point spread function (PSF) increasing with flux, alternatively called the 'Brighter-Fatter Effect.' In this work a novel approach was tested to perform the PSF measurements in the context of the Brighter-Fatter Effect, employing a Michelson interferometer to project a sinusoidal fringe pattern onto the CCD. The Brighter-Fatter Effect predicts that the fringe pattern should become asymmetric in the intensity pattern, as the brighter peaks corresponding to a larger flux are smeared by a larger PSF. By fitting the data with a model that allows for a changing PSF, the strength of the Brighter-Fatter Effect can be evaluated.

The resolution of super-resolution microscopy based on single molecule localization is in part determined by the accuracy of the localization algorithm. In most published approaches to date this localization is done by fitting an analytical function that approximates the point spread function (PSF) of the microscope. However, particularly for localization in 3D, analytical functions such as a Gaussian, which are computationally inexpensive, may not accurately capture the PSF shape, leading to reduced fitting accuracy. On the other hand, analytical functions that can accurately capture the PSF shape, such as those based on pupil functions, can be computationally expensive. Here we investigate the use of cubic splines as an alternative fitting approach. We demonstrate that cubic splines can capture the shape of any PSF with high accuracy and that they can be used for fitting the PSF with only a 2-3x increase in computation time as compared to Gaussian fitting. We provide an open-source software package that measures the PSF of any microscope and uses the measured PSF to perform 3D single molecule localization microscopy analysis with reasonable accuracy and speed.
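The core idea, fitting the position of a spline model of a measured PSF to an observed spot, can be illustrated with a toy 2D example (a sketch only: least squares instead of the likelihood model real localization packages use, and a synthetic Gaussian standing in for a measured PSF):

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline
from scipy.optimize import minimize

def spline_psf_model(measured_psf):
    """Represent a measured PSF image with a bicubic spline."""
    g = np.arange(measured_psf.shape[0], dtype=float)
    return RectBivariateSpline(g, g, measured_psf, kx=3, ky=3)

def fit_position(spot, model):
    """Recover the sub-pixel (dy, dx) shift of `spot` relative to the
    spline PSF model by least-squares fitting of shift and amplitude."""
    g = np.arange(spot.shape[0], dtype=float)

    def cost(p):
        dy, dx, amp = p
        # Evaluating the spline on a shifted grid renders a shifted PSF.
        return np.sum((amp * model(g - dy, g - dx) - spot) ** 2)

    return minimize(cost, [0.0, 0.0, 1.0], method="Nelder-Mead").x
```

Because the spline is built directly from the measured PSF image, it captures any asymmetry a Gaussian model would miss, at a modest cost per evaluation.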

The AGILE scientific instrument has been calibrated with a tagged γ-ray beam at the Beam Test Facility (BTF) of the INFN Laboratori Nazionali di Frascati (LNF). The goal of the calibration was the measurement of the point spread function (PSF) as a function of the photon energy and incidence angle, and the validation of the Monte Carlo (MC) simulation of the silicon tracker operation. The calibration setup is described and some preliminary results are presented.

In recent years three-dimensional (3D) super-resolution fluorescence imaging by single-molecule localization (localization microscopy) has gained considerable interest because of its simple implementation and high optical resolution. Astigmatic and biplane imaging are experimentally simple methods to engineer a 3D-specific point spread function (PSF), but existing evaluation methods have proven problematic in practical application. Here we introduce the use of cubic B-splines to model the relationship of axial position and PSF width in the above-mentioned approaches and compare the performance with existing methods. We show that cubic B-splines are the first method that can combine precision, accuracy and simplicity.
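The width-vs-z calibration idea can be sketched as follows. Everything below is illustrative: the quadratic calibration "measurements" are made up for the example (in practice they come from a bead scan), and the z lookup is a simple grid search rather than the paper's evaluation procedure.

```python
import numpy as np
from scipy.interpolate import make_interp_spline

# Cubic B-spline calibration of PSF width vs. axial position z for the
# x and y directions (astigmatic imaging). Assumed, synthetic data.
z_cal = np.linspace(-500.0, 500.0, 41)                 # nm
wx_cal = 1.0 + 0.5 * ((z_cal - 150.0) / 300.0) ** 2    # width_x(z), a.u.
wy_cal = 1.0 + 0.5 * ((z_cal + 150.0) / 300.0) ** 2    # width_y(z), a.u.
spline_wx = make_interp_spline(z_cal, wx_cal, k=3)     # cubic B-spline
spline_wy = make_interp_spline(z_cal, wy_cal, k=3)

def z_from_widths(wx, wy):
    """Axial position whose calibrated width pair best matches (wx, wy)."""
    zz = np.linspace(-500.0, 500.0, 2001)
    cost = (spline_wx(zz) - wx) ** 2 + (spline_wy(zz) - wy) ** 2
    return zz[np.argmin(cost)]
```

Because the two calibration curves are offset in z, the measured pair (wx, wy) determines the axial position unambiguously within the calibrated range.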

This paper presents a rational-operator-based approach to depth from defocus (DfD) for the reconstruction of three-dimensional scenes from two-dimensional images, which enables fast DfD computation that is independent of scene textures. Two variants of the approach, one using the Gaussian rational operators (ROs) that are based on the Gaussian point spread function (PSF) and the second based on the generalized Gaussian PSF, are considered. A novel DfD correction method is also presented to further improve the performance of the approach. Experimental results are considered for real scenes and show that both approaches outperform existing RO-based methods.

The applicability of sum-coincidence measurements of two-step cascade gamma-ray spectra for determining the Photon Strength Function (PSF) of Hf-181, produced via the Hf-180(n,2γ)Hf-181 reaction, is presented. Up to 80% of the intensity of the primary gamma-ray transitions over a wide energy range has been deduced and compared to model calculations.

Because a central obscuration exists in most optical synthetic aperture systems, it is necessary to analyze its effect on their imaging performance. Based on incoherent diffraction-limited imaging theory, a Golay-3 type synthetic aperture system was used to study the effects of the central obscuration on the point spread function (PSF) and the modulation transfer function (MTF). It was found that the central obscuration does not affect the width of the central peak of the PSF or the cutoff spatial frequency of the MTF, but it attenuates the first sidelobe of the PSF and the mid-frequencies of the MTF. Imaging simulation of a Golay-3 type synthetic aperture system with central obscuration confirmed this conclusion. Finally, a Wiener filter restoration algorithm was used to restore the images from this system, and the restored images were clearly improved.
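The qualitative effect can be reproduced numerically for a single circular pupil, which is a simplified stand-in for the Golay-3 sub-apertures (not a full array model): for incoherent imaging, PSF = |FT(pupil)|² and the MTF is the normalised magnitude of the PSF's Fourier transform.

```python
import numpy as np

def psf_and_mtf(obscuration_ratio, n=256, r=40):
    """PSF and MTF of a circular pupil with a central obscuration.

    Simplified single-aperture sketch: PSF = |FT(pupil)|^2,
    MTF = normalised |FT(PSF)| (the pupil autocorrelation).
    """
    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    rho = np.hypot(x, y)
    pupil = ((rho <= r) & (rho >= obscuration_ratio * r)).astype(float)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil))) ** 2
    psf /= psf.sum()
    mtf = np.abs(np.fft.fft2(np.fft.ifftshift(psf)))
    return psf, mtf / mtf[0, 0]

psf_clear, mtf_clear = psf_and_mtf(0.0)   # unobscured pupil
psf_obsc, mtf_obsc = psf_and_mtf(0.3)     # 30 % central obscuration
# The MTF cutoff (at index 2*r from DC) is unchanged by the obscuration,
# while the mid-frequency MTF is attenuated, as the abstract states.
```

The same FFT recipe extends to a multi-aperture Golay-3 pupil mask by drawing three obscured sub-pupils into the `pupil` array.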

We prospectively evaluated whether a strategy using point spread function (PSF) reconstruction for both diagnostic and quantitative analysis in non-small cell lung cancer (NSCLC) patients meets the European Association of Nuclear Medicine (EANM) guidelines for harmonization of quantitative values. The NEMA NU-2 phantom was used to determine the optimal filter to apply to PSF-reconstructed images in order to obtain recovery coefficients (RCs) fulfilling the EANM guidelines for tumour positron emission tomography (PET) imaging (PSF_EANM). PET data of 52 consecutive NSCLC patients were reconstructed with unfiltered PSF reconstruction (PSF_allpass), PSF_EANM and with a conventional ordered subset expectation maximization (OSEM) algorithm known to meet EANM guidelines. To mimic a situation in which a patient would undergo pre- and post-therapy PET scans on different generation PET systems, standardized uptake values (SUVs) for OSEM reconstruction were compared to SUVs for PSF_EANM and PSF_allpass reconstruction. Overall, in 195 lesions, Bland-Altman analysis demonstrated that the mean ratio between PSF_EANM and OSEM data was 1.03 [95 % confidence interval (CI) 0.94-1.12] and 1.02 (95 % CI 0.90-1.14) for SUV_max and SUV_mean, respectively. No difference was noticed when analysing lesions based on their size and location or on patient body habitus and image noise. Ten patients (84 lesions) underwent two PET scans for response monitoring. Using the European Organization for Research and Treatment of Cancer (EORTC) criteria, there was an almost perfect agreement between OSEM_PET1/OSEM_PET2 (current standard) and OSEM_PET1/PSF_EANM-PET2 or PSF_EANM-PET1/OSEM_PET2, with kappa values of 0.95 (95 % CI 0.91-1.00) and 0.99 (95 % CI 0.96-1.00), respectively. The use of PSF_allpass either for pre- or post-treatment (i.e. OSEM_PET1/PSF_allpass-PET2 or PSF_allpass-PET1/OSEM_PET2) showed

Theory and empirical evidence suggest that plant-soil feedback (PSF) determines the structure of a plant community and nutrient cycling in terrestrial ecosystems. The plant community alters the nutrient pool size in soil by affecting litter decomposition processes, which in turn shapes the plant community, forming a PSF system. However, the role of microbial decomposers in PSF function is often overlooked, and it remains unclear whether decomposers reinforce or weaken litter-mediated plant control over nutrient cycling. Here, we present a theoretical model incorporating the functional diversity of both plants and microbial decomposers. Two fundamental microbial processes are included that control nutrient mineralization from plant litter: (i) assimilation of mineralized nutrient into the microbial biomass (microbial immobilization), and (ii) release of the microbial nutrients into the inorganic nutrient pool (net mineralization). With this model, we show that microbial diversity may act as a buffer that weakens plant control over the soil nutrient pool, reversing the sign of PSF from positive to negative and facilitating plant coexistence. This is explained by the decoupling of litter decomposability and nutrient pool size arising from a flexible change in the microbial community composition and decomposition processes in response to variations in plant litter decomposability. Our results suggest that the microbial community plays a central role in PSF function and the plant community structure. Furthermore, the results strongly imply that the plant-centered view of nutrient cycling should be changed to a plant-microbe-soil feedback system, by incorporating the community ecology of microbial decomposers and their functional diversity.

Purpose: Ultrafast imaging techniques based on spatiotemporal encoding (SPEN), such as RASER (rapid acquisition with sequential excitation and refocusing), are a promising new class of sequences, since they are largely insensitive to the magnetic field variations that cause signal loss and geometric distortion in EPI. So far, attempts to theoretically describe the point-spread-function (PSF) for the original SPEN imaging techniques have yielded limited success. To fill this gap, a novel definition of an apparent PSF is proposed. Theory: Spatial resolution in SPEN imaging is determined by the spatial phase dispersion imprinted on the acquired signal by a frequency-swept excitation or refocusing pulse. The resulting signal attenuation increases with larger distance from the vertex of the quadratic phase profile. Methods: Bloch simulations and experiments were performed to validate the theoretical derivations. Results: The apparent PSF quantifies the fractional contribution of magnetization to a voxel's signal as a function of distance to the voxel; in contrast, the conventional PSF represents the signal intensity at various locations. Conclusion: The definition of the conventional PSF fails for SPEN imaging, since only the phase of the isochromats, and not the amplitude of the signal, varies. The concept of the apparent PSF is shown to be generalizable to conventional Fourier imaging techniques. PMID:26712657

Measuring a weak gravitational lensing signal to the level required by the next generation of space-based surveys demands exquisite reconstruction of the point-spread function (PSF). However, unresolved binary stars can significantly distort the PSF shape. In an effort to mitigate this bias, we aim at detecting unresolved binaries in realistic Euclid stellar populations. We tested methods in numerical experiments where (I) the PSF shape is known to Euclid requirements across the field of view; and (II) the PSF shape is unknown. We drew simulated catalogues of PSF shapes for this proof-of-concept paper. Following the Euclid survey plan, the objects were observed four times. We propose three methods to detect unresolved binary stars. The detection is based on the systematic and correlated biases between exposures of the same object. One method is a simple correlation analysis, while the two others use supervised machine-learning algorithms (random forest and artificial neural network). In both experiments, we demonstrate the ability of our methods to detect unresolved binary stars in simulated catalogues. The performance depends on the level of prior knowledge of the PSF shape and the shape measurement errors. Good detection performances are observed in both experiments. Full complexity, in terms of the images and the survey design, is not included, but key aspects of a more mature pipeline are discussed. Finding unresolved binaries in objects used for PSF reconstruction increases the quality of the PSF determination at arbitrary positions. We show, using different approaches, that we are able to detect at least the binary stars that are most damaging for the PSF reconstruction process. The code corresponding to the algorithms used in this work and all scripts to reproduce the results are publicly available from a GitHub repository accessible via http://lastro.epfl.ch/software

In many image restoration/resolution enhancement applications, the blurring process, i.e., the point spread function (PSF) of the imaging system, is not known or is known only to within a set of parameters. We estimate these PSF parameters for this ill-posed class of inverse problems from the raw data, along with the regularization parameters required to stabilize the solution, using the generalized cross-validation (GCV) method. We propose efficient approximation techniques based on the Lanczos algorithm and Gauss quadrature theory, reducing the computational complexity of the GCV. Data-driven PSF and regularization parameter estimation experiments with synthetic and real image sequences are presented to demonstrate the effectiveness and robustness of our method.
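For the regularization parameter alone, the GCV criterion is easy to state. Below is a small dense-SVD sketch for Tikhonov/ridge regularization; the paper's contribution is precisely avoiding this dense computation for large problems by approximating the traces with Lanczos iterations and Gauss quadrature.

```python
import numpy as np

def gcv_ridge(A, y, lambdas):
    """Pick the Tikhonov parameter minimising the GCV score
    GCV(l) = ||y - A x_l||^2 / (m - trace(influence matrix))^2.

    Small-problem version using a dense SVD; for large ill-posed
    systems the traces would instead be approximated.
    """
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    Uty = U.T @ y
    # Residual energy of y outside the column space of A.
    out_of_range = np.sum(y**2) - np.sum(Uty**2)
    best_lam, best_score = None, np.inf
    for lam in lambdas:
        f = s**2 / (s**2 + lam)                    # Tikhonov filter factors
        resid = np.sum(((1.0 - f) * Uty) ** 2) + out_of_range
        score = resid / (len(y) - np.sum(f)) ** 2  # GCV(lambda)
        if score < best_score:
            best_lam, best_score = lam, score
    return best_lam
```

The same scan over a (PSF-parameter, lambda) grid gives the joint data-driven estimate the abstract describes, with the GCV score evaluated once per candidate pair.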

Synthetic transmit aperture (STA) imaging gives the possibility to acquire an image with only a few emissions and is appealing for 3D ultrasound imaging. Even though the number of emissions is low, the change in position of the scatterers prohibits the coherent summation of ultrasound echoes and ... is used to develop an approximation of the point spread function (PSF) of a LRI. It is shown that the PSFs of LRIs obtained by transmitting with different elements can be viewed as rotated versions of each other. Summing several LRIs gives a high resolution image. The model approximates the PSF of a high resolution image as a sum of rotated PSFs of a single LRI. The approximation is validated with a Field II simulation. The model predicts and explains the motion artifacts, and gives an intuitive feeling of what would happen for different velocities.

We study the possibility of using quadrupole moments of auto-convolved galaxy images to measure cosmic shear. The auto-convolution of an image corresponds to the inverse Fourier transformation of its power spectrum. The new method has the following advantages: the smearing effect due to the point-spread function (PSF) can be corrected by subtracting the quadrupole moments of the auto-convolved PSF; the centroid of the auto-convolved image is trivially identified; the systematic error due to noise can be directly removed in Fourier space; the PSF image can also contain noise, the effect of which can be similarly removed. With a large ensemble of simulated galaxy images, we show that the new method can reach a sub-percent level accuracy under general conditions, albeit with increasingly large stamp size for galaxies of less compact profiles.
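The moment bookkeeping can be sketched in a few lines (a simplified illustration with unweighted moments and analytic Gaussian profiles, far less than the realistic simulations of the paper). Since the power spectrum of the observed image is the product of the galaxy and PSF power spectra, the auto-convolved observed image is the convolution of the auto-convolved galaxy with the auto-convolved PSF, and quadrupole moments add under convolution, so subtracting the PSF terms recovers the galaxy's moments.

```python
import numpy as np

def autoconv(img):
    """Auto-convolution as the inverse FT of the image power spectrum."""
    return np.fft.fftshift(np.fft.ifft2(np.abs(np.fft.fft2(img)) ** 2).real)

def quadrupole(img):
    """Unweighted quadrupole moments (Qxx, Qyy, Qxy) about the centroid."""
    n = img.shape[0]
    y, x = np.mgrid[0:n, 0:n].astype(float)
    norm = img.sum()
    xc, yc = (x * img).sum() / norm, (y * img).sum() / norm
    qxx = ((x - xc) ** 2 * img).sum() / norm
    qyy = ((y - yc) ** 2 * img).sum() / norm
    qxy = ((x - xc) * (y - yc) * img).sum() / norm
    return qxx, qyy, qxy
```

Differencing the quadrupoles of the auto-convolved observed image and the auto-convolved PSF then yields the (doubled) intrinsic galaxy moments, from which an ellipticity such as e1 = (Qxx - Qyy)/(Qxx + Qyy) follows.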

As part of the astrometric Hubble Space Telescope (HST) large program GO-12911, we conduct an in-depth study to characterize the point spread function (PSF) of the UVIS channel of the Wide Field Camera 3 (WFC3), as a necessary step toward achieving the astrometric goals of the program. We extracted a PSF from each of the 589 deep exposures taken through the F467M filter over the course of a year and find that the vast majority of the PSFs lie along a 1-D locus that stretches continuously from one side of focus, through optimal focus, to the other side of focus. We constructed a focus-diverse set of PSFs and find that with only five medium-bright stars in an exposure it is possible to pin down the focus level of that exposure. We show that the focus-optimized PSF does a considerably better job of fitting stars than the average 'library' PSF, especially when the PSF is out of focus; the fluxes and positions are significantly improved over the 'library' PSF treatment. These results are beneficial for a much broader range of scientific applications than the program at hand, but the immediate use of these PSFs will enable us to search for astrometric wobble in the bright stars in the core of the globular cluster M 4, which would indicate a dark, high-mass companion such as a white dwarf, neutron star or black hole.

Source detection in counting-type experiments such as Cherenkov telescopes often involves the application of the classical Eq. (17) from the paper of Li & Ma (1983) to discrete on- and off-source regions. The on-source region is typically a circular area with radius θ in which the signal is expected to appear with the shape of the instrument point spread function (PSF). This paper addresses the question of which θ maximises the probability of detection for a given PSF width and background event density. In the high count number limit and assuming a Gaussian PSF profile, the optimum is found to be at ζ²_∞ ≈ 2.51 times the squared PSF width σ²_PSF. While this number is shown to be a good choice in many cases, a dynamic formula is given for cases of lower count numbers, which favour larger on-source regions. The recipe used to arrive at this parametrisation can also be applied to cases with a non-Gaussian PSF. This result can standardise and simplify analysis procedures, reduce trials, and eliminate the need for experience-based ad hoc cut definitions or expensive case-by-case Monte Carlo simulations.
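The quoted optimum can be reproduced in a few lines under one plausible reading of the high-count limit: for a Gaussian PSF the signal fraction inside radius θ is 1 − exp(−x/2) with x = θ²/σ²_PSF, the background count grows like θ², and maximising the significance ratio (1 − exp(−x/2))/√x leads to the stationarity condition (x + 1)·exp(−x/2) = 1.

```python
import numpy as np
from scipy.optimize import brentq

def optimal_ratio():
    """Solve (x + 1) * exp(-x/2) = 1 for x = theta^2 / sigma_PSF^2,
    the high-count optimum for a Gaussian PSF over flat background."""
    return brentq(lambda x: (x + 1.0) * np.exp(-x / 2.0) - 1.0, 0.5, 10.0)

x_opt = optimal_ratio()   # ~2.51, matching the quoted zeta^2 value
```

The bracket [0.5, 10] excludes the trivial root at x = 0, where the on-source region vanishes.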

Quantification of tumour heterogeneity in PET images has recently gained interest, but has been shown to depend on image reconstruction. This study aimed to evaluate the impact of the EANM/EARL accreditation program on selected ¹⁸F-FDG heterogeneity metrics. We prospectively analysed 71 tumours in 60 biopsy-proven lung cancer patient acquisitions reconstructed with unfiltered point-spread function (PSF) positron emission tomography (PET) images (optimised for diagnostic purposes), PSF-reconstructed images with a 7-mm Gaussian filter (PSF7) chosen to meet European Association of Nuclear Medicine (EANM) 1.0 harmonising standards, and EANM Research Ltd. (EARL)-compliant ordered-subset expectation maximisation (OSEM) images. Delineation was performed with the fuzzy locally adaptive Bayesian (FLAB) algorithm on PSF images and reported on the PSF7 and OSEM ones, and with a 50% standardised uptake value (SUV) threshold (SUVmax50%) applied independently to each image. Robust and repeatable heterogeneity metrics including 1st-order [area under the curve of the cumulative histogram (CHAUC)], 2nd-order (entropy, correlation, and dissimilarity), and 3rd-order [high-intensity larger area emphasis (HILAE) and zone percentage (ZP)] textural features (TF) were statistically compared. Volumes obtained with SUVmax50% were significantly smaller than FLAB-derived ones, and were significantly smaller in PSF images than in OSEM and PSF7 images. PSF-reconstructed images showed significantly higher SUVmax and SUVmean values, as well as higher heterogeneity for CHAUC, dissimilarity, correlation, and HILAE, and a wider range of heterogeneity values than OSEM images for most of the metrics considered, especially when analysing larger tumours. Histological subtypes had no impact on TF distribution. No significant difference was observed between any of the considered metrics (SUV or heterogeneity features) that we…

Point-spread function (PSF) reconstruction improves spatial resolution throughout the entire field of view of a PET system and can detect smaller metastatic deposits than conventional algorithms such as OSEM. We assessed the impact of PSF reconstruction on quantitative values and diagnostic accuracy for axillary staging of breast cancer patients, compared with an OSEM reconstruction, with emphasis on the size of nodal metastases. This was a prospective study in a single referral centre in which 50 patients underwent an ¹⁸F-FDG PET examination before axillary lymph node dissection. PET data were reconstructed with an OSEM algorithm and PSF reconstruction, analysed blindly, and validated by a pathologist who measured the largest nodal metastasis per axilla. This size was used to evaluate PET diagnostic performance. On pathology, 34 patients (68%) had nodal involvement. Overall, the median size of the largest nodal metastasis per axilla was 7 mm (range 0.5-40 mm). PSF reconstruction detected more involved nodes than OSEM reconstruction (p = 0.003). The mean PSF to OSEM SUVmax ratio was 1.66 (95% CI 1.01-2.32). The sensitivities of PSF and OSEM reconstructions were, respectively, 96% and 92% in patients with a largest nodal metastasis of >7 mm, 60% and 40% in patients with a largest nodal metastasis of ≤7 mm, and 92% and 69% in patients with a primary tumour ≤30 mm. Biggerstaff graphical comparison showed that globally PSF reconstruction was superior to OSEM reconstruction. The median sizes of the largest nodal metastasis in patients with nodal involvement not detected by either reconstruction, detected by PSF but not by OSEM reconstruction, and detected by both reconstructions were 3, 6 and 16 mm (p = 0.0064), respectively. In patients with nodal involvement detected by PSF reconstruction but not by OSEM reconstruction, the smallest detectable metastasis was 1.8 mm. As a result of better activity recovery, PET with PSF…

.... This will extend the performance range as well as the information retrieval from underwater electro-optical systems, which is critical in many civilian and military applications, including target...

We present details of the construction and characterization of the coaddition of the Sloan Digital Sky Survey (SDSS) Stripe 82 ugriz imaging data. This survey consists of 275 deg² of repeated scanning by the SDSS camera over −50° ≤ α ≤ 60° and −1.25° ≤ δ ≤ +1.25° centered on the Celestial Equator. Each piece of sky has ∼20 runs contributing and thus reaches ∼2 mag fainter than the SDSS single-pass data, i.e., to r ∼ 23.5 for galaxies. We discuss the image processing of the coaddition, the modeling of the point-spread function (PSF), the calibration, and the production of standard SDSS catalogs. The data have an r-band median seeing of 1.″1 and are calibrated to ≤1%. Star color-color plots, number counts, and PSF size versus modeled size plots show that the modeling of the PSF is good enough for precision five-band photometry. Structure in the PSF model versus magnitude plot indicates minor PSF modeling errors, leading to misclassification of stars as galaxies, as verified using VVDS spectroscopy. There are a variety of uses for this wide-angle deep imaging data, including galactic structure, photometric redshift computation, cluster finding and cross-wavelength measurements, weak-lensing cluster mass calibrations, and cosmic shear measurements.

Recent advances in computer-generated imagery (CGI) have been used in commercial and industrial photography, providing a broad scope in product advertising. Mixing real-world images with those rendered from virtual-space software shows a more or less visible mismatch between the corresponding levels of image quality. Rendered images are produced by software whose quality is limited only by the output resolution. Real-world images are taken with cameras subject to some amount of image degradation factors, such as residual lens aberrations, diffraction, sensor low-pass anti-aliasing filters, color-pattern demosaicing, etc. The effect of all those image degradation factors can be characterized by the system point-spread function (PSF). Because the image is the convolution of the object with the system PSF, its characterization shows the amount of degradation added to any picture taken. This work explores the use of image processing to degrade the rendered images following the parameters indicated by the real system PSF, attempting to match virtual and real-world image quality. The system MTF is determined by the slanted-edge method both in laboratory conditions and in the real picture environment in order to compare the influence of the working conditions on the device performance; an approximation to the system PSF is derived from the two measurements. The rendered images are filtered through a Gaussian filter obtained from the taking system's PSF. Results with and without filtering are shown and compared by measuring the contrast achieved in different regions of the final image.
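The degradation step described here amounts to convolving the rendered image with the measured system PSF. A minimal sketch, assuming the PSF is well approximated by a Gaussian as in the abstract; the "rendered image" (a single bright point) and the value of sigma are placeholders, not measured values:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Hypothetical rendered "image": a single bright point on a dark background,
# so the filtered output directly visualises the assumed Gaussian system PSF.
rendered = np.zeros((21, 21))
rendered[10, 10] = 1.0

# In practice sigma would be derived from the slanted-edge MTF measurement;
# the value here is an arbitrary placeholder.
degraded = gaussian_filter(rendered, sigma=2.0)
```

Convolution with a normalised PSF conserves total flux while spreading it, lowering the peak value — exactly the quality loss the authors want to reproduce in the rendered images.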

Purpose: To develop a fast and accurate calculation model to reconstruct the applied photon fluence of an external photon radiation therapy treatment based on an image recorded by an electronic portal imaging device (EPID). Methods: To reconstruct the initial photon fluence, the 2D EPID image was corrected for scatter from the patient/phantom and EPID to generate the transmitted primary photon fluence. This was done by an iterative deconvolution using precalculated point-spread functions (PSFs). The transmitted primary photon fluence was then backprojected through the patient/phantom geometry, assuming linear attenuation, to recover the initial photon fluence applied for the treatment. The calculation model was verified using Monte Carlo simulations performed with the EGSnrc code system. EPID images were produced by calculating the dose deposition in the EPID from a 6 MV photon beam irradiating a water phantom with air and bone inhomogeneities and the ICRP anthropomorphic voxel phantom. Results: The initial photon fluence was reconstructed using a single PSF and using position-dependent PSFs which depend on the radiological thickness of the irradiated object. Applying position-dependent point-spread functions, the mean uncertainty of the reconstructed initial photon fluence could be reduced from 1.13% to 0.13%. Conclusion: This study presents a calculation model for fluence reconstruction from EPID images. The results show a clear advantage when position-dependent PSFs are used for the iterative reconstruction. The basis of a reconstruction method was established, and further evaluations must be made in an experimental study.
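The scatter-correction step, an iterative deconvolution with a precalculated PSF, can be sketched in one dimension with a Van Cittert-type iteration. This is a common choice for such corrections, not necessarily the authors' exact scheme, and the PSF and "measured" signal below are toy stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256
true_fluence = rng.random(n)  # stand-in for the primary photon fluence

# Toy precalculated scatter PSF, symmetric so its transfer function is real
# and the iteration below converges (0 < H <= 1).
psf = np.zeros(n)
psf[0] = 0.6
psf[1] = 0.2
psf[-1] = 0.2
H = np.fft.fft(psf)

# "Measured" signal: fluence blurred by the scatter PSF (circular convolution)
measured = np.fft.ifft(np.fft.fft(true_fluence) * H).real

# Van Cittert iteration: f <- f + (measured - PSF * f)
f = measured.copy()
for _ in range(50):
    f = f + (measured - np.fft.ifft(np.fft.fft(f) * H).real)
```

Each pass adds back the residual between the measurement and the re-blurred estimate, so the estimate converges to the unblurred fluence when the PSF's transfer function stays between 0 and 2.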

The use of spherical aberration correctors in the scanning transmission electron microscope (STEM) has the effect of reducing the depth of field of the microscope, making three-dimensional imaging of a specimen possible by optical sectioning. Depth resolution can be improved further by placing aberration correctors and lenses both before and after the specimen to achieve an imaging mode known as scanning confocal electron microscopy (SCEM). We present the calculated incoherent point-spread functions (PSFs) and optical transfer functions (OTFs) of a STEM and an SCEM. The OTF of a STEM is shown to have a missing-cone region which results in severe blurring along the optic axis, which can be especially severe for extended objects. We also present strategies for reconstruction of experimental data, such as three-dimensional deconvolution of the point-spread function.

Prostacyclin-stimulating factor (PSF) acts on vascular endothelial cells to stimulate the synthesis of the vasodilatory molecule prostacyclin (PGI2). We have examined the expression, regulation, and hemodynamic bioactivity of PSF both in whole retina and in cultured cells derived from this tissue. PSF was expressed in all retinal cell types examined in vitro, but immunohistochemical analysis revealed PSF mainly associated with retinal vessels. PSF expression was constitutive in retinal pericy...

The decay from excited levels in medium and heavy nuclei can be described in a statistical approach by means of Photon Strength Functions (PSFs) and Level Density distributions combined with compound-nucleus theory. The study of electromagnetic cascades following neutron capture by means of high-efficiency detectors has been shown to be well suited for probing the properties of the Photon Strength Function of heavy (high level density) and/or radioactive (high background) nuclei. In this work we have investigated for the first time the validity of the recommended PSFs for actinides, in particular 235U, 238Np and 241Pu. Our study includes the search for resonance structures in the PSF below Sn and draws conclusions regarding their existence and their characteristics in terms of energy, width and electromagnetic nature.

Intraoperative abnormalities of coagulation function may occur for various reasons. In most scenarios, treatment is directed by laboratory parameters. Unfortunately, standard laboratory testing may take 1-2 h. The purpose of the current study was to evaluate a point-of-care testing device (CoaguChek® XS System) in pediatric patients. Patients ranging in age from 2 to 18 years, undergoing posterior spinal fusion (PSF) or cardiac surgery using cardiopulmonary bypass (CPB), were eligible for inclusion. After CPB and/or the surgical procedure, 2.8 ml of blood was obtained and simultaneously tested on both the standard laboratory apparatus and the CoaguChek® XS System. The study cohort consisted of 100 patients (50 PSF and 50 cardiac cases) with 13 cases excluded, leaving 87 patients (49 PSF and 38 cardiac cases) for analysis. In PSF cases, reference laboratory international normalized ratio (INR) ranged from 0.98 to 1.77, while CoaguChek® XS INR ranged from 1.0 to 1.3. The correlation coefficient was 0.69. The results of the Bland-Altman analysis showed a bias of 0.09, precision of 0.1, and 95% limits of agreement ranging from -0.11 to 0.28. In cardiac cases, reference INR ranged from 1.68 to 14.19, while CoaguChek® XS INR ranged from 1.4 to 7.9. The correlation coefficient was 0.35. The results of the Bland-Altman analysis showed a bias of -1.8, precision of 2.1, and 95% limits of agreement ranging from -6.0 to 2.4. INR values obtained from the CoaguChek® XS showed a moderate correlation with reference laboratory values within the normal range. However, in the presence of coagulopathy, the discrepancy was significantly greater, thereby making the CoaguChek® XS clinically unreliable.

The modulation transfer function (MTF) is the normalized spatial-frequency representation of the point-spread function (PSF) of the system. Point objects are hard to come by, so typically the PSF is determined by taking the numerical derivative of the system's response to an edge. This is the method we use, and we typically apply it to cylindrical objects. Given a cylindrical object, we first put an active contour around it, as shown in Figure 1(a). The active contour tells us where the boundary of the test object is. We next set a threshold (Figure 1(b)) and determine the center of mass of the above-threshold voxels. For the purposes of determining the center of mass, each voxel is weighted identically (not by voxel value).
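The edge-based MTF computation described above — differentiate the edge response to get the line-spread function (LSF), Fourier-transform it, and normalise at zero frequency — can be sketched in 1-D. The synthetic Gaussian-blurred edge is illustrative; in practice the edge response would come from the cylinder measurement:

```python
import numpy as np

n = 64
x = np.arange(n) - n // 2
sigma = 2.0  # assumed width of the blurring PSF

# Synthetic edge-spread function (ESF): an ideal step blurred by a Gaussian,
# built as the cumulative sum of a normalised Gaussian LSF.
lsf_true = np.exp(-x**2 / (2 * sigma**2))
lsf_true /= lsf_true.sum()
esf = np.cumsum(lsf_true)  # noise-free "measured" edge response

# LSF = numerical derivative of the edge response
lsf = np.gradient(esf)

# MTF = magnitude of the Fourier transform, normalised to 1 at zero frequency
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]
```

With real data the derivative step amplifies noise, so some smoothing or binning of the edge profile is usually applied first.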

This paper develops a critical analysis of the implications of defining the family as an object of health intervention, taking the Family Health Program (PSF) as its reference case and proposing an evaluation of the program's socio-cultural impact. As a strategic space for manifestation, confrontation, and therefore observation of the health-illness process, the family requires a multidisciplinary approach to its structure, dynamics, and behavior in the face of health-related problems, determinants, and actions. In this context, we present a proposal for assessment of the Family Health Program based on the premise that both problems and practices in the health field are socio-culturally constituted realities.

The study was carried out to investigate the effects of inhaled Mg alone and in association with F in the treatment of bronchial hyperresponsiveness. 43 male Wistar rats were randomly divided into four groups and exposed to inhaled NaCl 0.9%, MeCh, MgSO4 and MgF2. Pulmonary changes were assessed by means of functional tests and quantitative histological examination of the lungs and trachea. Results revealed that delivery of inhaled Mg associated with F led to a greater decrease of total lung resistance than inhaled Mg alone (p…

P-channel CCD imagers, 200-300 µm thick, fully depleted, and back-illuminated, are being developed for scientific applications including ground- and space-based astronomy and X-ray detection. These thick devices have extended IR response, good point-spread function (PSF), and excellent radiation tolerance. Initially, these CCDs were made in-house at LBNL using 100 mm diameter wafers. Fabrication on high-resistivity 150 mm wafers is now proceeding according to a model in which the wafers are fir…

A Monte Carlo simulation program named MCPEP has been developed. Based on an existing simulation program that simulates the transport of X-ray photons and the secondary electrons, MCPEP also simulates the light photons in the screen. The performance of an intensifying screen (Gd₂O₂S:Tb) with different thicknesses and different X-ray energies has been analyzed with MCPEP. The calculated light-photon probability distribution, average light-photon number per absorbed X-ray photon, statistical factor for light emission, X-ray detection efficiency, detective quantum efficiency (DQE) and point-spread function (PSF) of the screen are presented.

A simple model has been developed and implemented in Matlab code, predicting the over-exposed pixel area of cameras caused by laser dazzling. Inputs of this model are the laser irradiance on the front optics of the camera, the point-spread function (PSF) of the optics used, the integration time of the camera, and camera sensor specifications such as pixel size, quantum efficiency and full-well capacity. Effects of the read-out circuit of the camera are not incorporated. The model was evaluated w…

Comprehensive evaluation of retinal image quality requires that light scatter as well as optical aberrations be considered. In investigating how retinal image degradation affects eye growth in the chick model of myopia, we developed a simple method based on Shack-Hartmann images for evaluating the effects of both monochromatic aberrations and light scatter on retinal image quality. We further evaluated our method in the current study by applying it to data collected from both normal chick eyes and albino eyes that were expected to show increased intraocular light scatter. To analyze light scatter in our method, each Shack-Hartmann dot is treated as a local point-spread function (PSF) that is the convolution of a local scatter PSF and a lenslet diffraction PSF. The local scatter PSF is obtained by deconvolution and is fitted with a circularly symmetric Gaussian function using nonlinear regression. A whole-eye scatter PSF can also be derived from the local scatter PSFs for the analyzed pupil. Aberrations are analyzed using OSA-standard Zernike polynomials, and the aberration-related PSF is calculated from the reconstructed wavefront using a fast Fourier transform. Modulation transfer functions (MTFs) are computed separately for the aberration and scatter PSFs, and a whole-eye MTF is derived as the product of the two. This method was applied to 4 normal and 4 albino eyes. Compared to normal eyes, albino eyes were more aberrated and showed greater light scatter. As a result, overall retinal image degradation was much greater in albino eyes than in normal eyes, with the relative contribution of light scatter to retinal image degradation, compared to aberrations, also being greater for albino eyes.
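The final step above, a whole-eye MTF formed as the product of the aberration and scatter MTFs, relies on the fact that convolving PSFs multiplies their transfer functions. A 1-D sketch with stand-in Gaussian PSFs (widths are arbitrary, not measured chick-eye values):

```python
import numpy as np

n = 128

def gaussian_psf(sigma):
    """Normalised 1-D Gaussian PSF centred in an n-sample window."""
    x = np.arange(n) - n // 2
    p = np.exp(-x**2 / (2 * sigma**2))
    return p / p.sum()

def mtf(psf):
    """MTF: Fourier magnitude normalised to 1 at zero frequency."""
    m = np.abs(np.fft.rfft(psf))
    return m / m[0]

psf_aberr = gaussian_psf(1.5)    # stand-in aberration PSF
psf_scatter = gaussian_psf(4.0)  # stand-in scatter PSF

# Whole-eye PSF: convolution of the two (circular, via FFT)
psf_eye = np.fft.irfft(np.fft.rfft(psf_aberr) * np.fft.rfft(psf_scatter), n)
```

Because the PSFs are normalised to unit area, the whole-eye MTF equals the product of the two component MTFs exactly, which is what licenses computing them separately and multiplying.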

Compared with SPECT and PET, a Compton camera based on electronic collimation has the advantages of easy mobility, close-up scanning, and simultaneous multi-tracer imaging for radiation therapy, molecular and nuclear medical applications. However, the spatial resolution of the Compton camera suffers from the measurement uncertainties of interaction positions and energies. Moreover, the degree of degradation of the spatial resolution is shift-variant over the field-of-view (FOV) due to the imaging principle based on conical surface integration. In this study, the shift-variant point-spread function (SV-PSF) is estimated from the resolution measured using 35 point sources in the FOV and is incorporated into the system matrix of a fully three-dimensional and accelerated reconstruction, i.e. the list-mode OSEM (LMOSEM) algorithm, for resolution recovery. The measured resolutions of the 35 point sources were fitted to an exponential function of radial (r) and axial (d) distances, f(r,d) = A*exp(Br + Cd). The coefficients (A, B, C) of the fitted surface function of the SV-PSF were not identical between the x-axis (5.8, 0.0032, 0.019) and the yz-plane (6.1, 0.0022, 0.013). LMOSEM with the SV-PSF yielded more improved resolution of 2 point sources over the FOV than LMOSEM without a PSF or with a shift-invariant PSF. The Compton camera can perform volumetric and multi-tracer imaging in molecular and nuclear medical applications with the improved spatial resolution provided by LMOSEM with the SV-PSF.
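Fitting the exponential resolution model f(r,d) = A·exp(Br + Cd) reduces to ordinary linear least squares after taking logarithms. A sketch using synthetic noise-free "measurements" generated from the x-axis coefficients quoted above; the 35 sampling positions are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth coefficients (the x-axis values quoted in the abstract)
A, B, C = 5.8, 0.0032, 0.019

# Hypothetical radial/axial positions of the 35 point sources (mm)
r = rng.uniform(0, 100, 35)
d = rng.uniform(0, 100, 35)
resolution = A * np.exp(B * r + C * d)  # synthetic measured resolutions

# log f = log A + B*r + C*d  ->  ordinary linear least squares
X = np.column_stack([np.ones_like(r), r, d])
coef, *_ = np.linalg.lstsq(X, np.log(resolution), rcond=None)
A_fit, B_fit, C_fit = np.exp(coef[0]), coef[1], coef[2]
```

With real, noisy resolution measurements one would instead fit the exponential directly (e.g. nonlinear least squares), since the log transform reweights the noise.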

A fast image restoration method is proposed for vibration image deblurring based on coded exposure and vibration detection. The criterion for code sequence selection is discussed in detail, and several factors are considered in searching for the optimal coded-exposure sequence. The blurred vibration image is obtained by the coded-exposure technique. Meanwhile, the vibration track of the camera is detected by a fiber-optic gyroscope. The point-spread function (PSF) is estimated using a statistical method with the selected code sequence and vibration track information. Finally, the blurred image is quickly restored with the estimated PSF through a direct inverse filtering method. Simulation experiments are conducted to test the performance of the approach with different vibration forms. A real imaging system is constructed to verify the effectiveness of the proposed algorithm. Experimental results show that the presented algorithm yields better subjective experience and superior objective evaluation values.
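The final restoration step, direct inverse filtering with the estimated PSF, can be sketched in 1-D. The blur kernel and regularisation constant below are illustrative; in the paper the PSF would come from the code sequence and the gyroscope's vibration track:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
sharp = rng.random(n)  # stand-in for the unblurred scene

# Toy blur PSF (in practice: estimated from code sequence + vibration track)
psf = np.zeros(n)
psf[:3] = [0.2, 0.6, 0.2]
H = np.fft.fft(psf)

# Simulated blurred observation (circular convolution via FFT)
blurred = np.fft.ifft(np.fft.fft(sharp) * H).real

# Direct inverse filter, with a small constant guarding against division by
# near-zero frequency components of H
eps = 1e-8
restored = np.fft.ifft(
    np.fft.fft(blurred) * np.conj(H) / (np.abs(H)**2 + eps)
).real
```

This is precisely why coded exposure matters: the code is chosen so that H has no near-zero frequency components, which keeps the direct division stable without heavy regularisation.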

Some of the challenges in translating photoacoustic (PA) imaging to clinical applications include the limited view of the target tissue, low signal-to-noise ratio, and the high cost of developing real-time systems. Acoustic-lens-based PA imaging systems, also known as PA cameras, are a potential alternative to conventional imaging systems in these scenarios. The 3D focusing action of the lens enables real-time C-scan imaging with a 2D transducer array. In this paper, we model the underlying physics of a PA camera in the mathematical framework of an imaging system and derive a closed-form expression for the point-spread function (PSF). Experimental verification follows, including details on how to design and fabricate the lens inexpensively. The system PSF is evaluated over a 3D volume that can be imaged by this PA camera. Its utility is demonstrated by imaging a phantom and an ex vivo human prostate tissue sample.

Monte Carlo simulation of neutron coded imaging based on an encoding aperture, for a Z-pinch with a large 5 mm radius field-of-view, has been investigated, and the coded image has been obtained. A reconstruction method for the source image based on genetic algorithms (GA) has been established. A "residual watermark", which emerges unavoidably in the reconstructed image when peak normalization is employed in the GA fitness calculation because of its statistical fluctuation amplification, has been discovered and studied. The residual watermark is primarily related to the shape and other parameters of the encoding-aperture cross section. The properties and essential causes of the residual watermark were analyzed, and an identification of the equivalent radius of the aperture was provided. By using the equivalent radius, the reconstruction can also be accomplished without knowing the point-spread function (PSF) of the actual aperture. The reconstruction result is close to that obtained using the PSF of the actual aperture.

The potential of a next-generation ground-based gamma-ray telescope to perform morphological studies of celestial gamma-ray sources is investigated. With this aim, general analytical expressions for the instrument response are derived, and simulations of an isolated source are used as a benchmark to understand the telescope performance. The morphology is represented assuming an ideal Gaussian point-spread function (PSF) and a non-Gaussian PSF with extended tails. The response of the telescope is also tested in complex environments. In particular, the effect of locating the source (i) near a second one and (ii) on top of a diffuse halo-type object is investigated. The first scenario is particularly interesting in the framework of Galactic objects, where the presence of more than one source in the same field of view (FoV) is expected. The latter represents a relevant study in the context of extended extragalactic sources surrounding AGNs.

In this paper we present a beamforming technique based on synthetic aperture imaging that makes it possible to improve radio-frequency (RF) ultrasound images with lateral oscillations for lateral displacement estimation. As described in previous work, in order to increase the accuracy of lateral displacement estimation using images with lateral oscillations, it is necessary to reduce both the wavelength of the lateral oscillations and the width of the point-spread function (PSF). This is achieved in this work by performing both transmit and receive beamforming on synthetic aperture data. We show that the wavelength of the lateral oscillations can be reduced by a factor of 2, and the width of the PSF can be reduced by a factor of √2. We have used the images obtained by this beamforming technique for lateral displacement estimation in the field of elastography. We show that with this new approach it is possible…

We compare side-lobe suppression methods for nonlinear superresolution optical microscopy using phase-masked excitation beams. The excitation point-spread function (PSF) can be engineered by introducing a phase mask for superresolution microscopy. By applying a single π phase step to the excitation, the central spot can be narrowed to provide improved lateral resolution. However, the energy redistribution leads to side lobes with increased intensity that complicate imaging applications. Several methods have been implemented to suppress the strength of the side lobes, including confocal detection and utilizing beams with different phase masks in multiphoton microscopy. Side-lobe suppression methods using confocal detection and different phase masks for the excitation beams are compared theoretically and experimentally. These results demonstrate the additional flexibility of PSF engineering for nonlinear optical processes.

Optical transition radiation (OTR) is emitted when a charged particle crosses the interface between two media with different dielectric properties. It has become a standard tool for beam imaging and transverse beam size measurements. At the KEK Accelerator Test Facility 2 (ATF2), OTR is used at the beginning of the final focus system to measure micrometre beam sizes using the visibility of the OTR point-spread function (PSF). In order to study the PSF in detail and improve the resolution of the monitor, a novel simulation tool has been developed. Based on the physical optics propagation mode of ZEMAX, the propagation of the OTR electric field can be simulated very precisely up to the image plane, taking into account aberrations and diffraction. This contribution presents the comparison between ZEMAX simulations and measurements performed at ATF2.

A three-dimensional wide-field image of a small fluorescent bead contains more than enough information to accurately calculate the wavefront in the microscope objective's back pupil plane using the phase-retrieval technique. The phase-retrieved wavefront can then be used to set a deformable mirror to correct the point-spread function (PSF) of the microscope without the use of a wavefront sensor. This technique will be useful for aligning the deformable mirror in a wide-field microscope with adaptive optics and could potentially be used to correct aberrations in samples where small fluorescent beads or other point sources are used as reference beacons. Another advantage is the high resolution of the retrieved wavefront compared with current Shack-Hartmann wavefront sensors. Here we demonstrate effective correction of the PSF in 3 iterations. Starting from a severely aberrated system, we achieve a Strehl ratio of 0.78 and a greater than 10-fold increase in maximum intensity.

Ink-jet printers are frequently used in crimes such as counterfeiting bank notes, driving licenses, and identification cards. Police investigators asked us to identify the makers or brands of ink-jet printers from counterfeits. To meet such demands, document examiners have classified ink-jet printers according to spur marks, which are made by the spur gears located in front of the print heads for paper feed. However, spur marks are so faint that they are difficult to detect. In this study, we propose a new method for detecting spur marks using a multiband scanner in near-infrared (NIR) mode and estimation of the point-spread function (PSF). To estimate the PSF we used the cepstrum, which is the inverse Fourier transform of the logarithm of the spectrum. The proposed method provided a clear image of the spur marks.
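The cepstrum used here — the inverse Fourier transform of the log spectrum — turns a periodic imprint such as a train of spur marks into an isolated peak at the corresponding spacing. A 1-D sketch with a synthetic "echo" signal; the spacing, echo amplitude, and signal model are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 512
base = rng.standard_normal(n)  # stand-in for the scanned background texture

# A periodic imprint modelled as a scaled, circularly shifted echo of the
# base signal, repeating every `spacing` samples
spacing = 40
signal = base + 0.8 * np.roll(base, spacing)

# Cepstrum: inverse FFT of the log-magnitude spectrum (small offset guards
# against log of zero)
cepstrum = np.fft.ifft(np.log(np.abs(np.fft.fft(signal)) + 1e-12)).real

# The echo shows up as a peak at quefrency == spacing
peak = 5 + int(np.argmax(cepstrum[5:n // 2]))
```

In the log-spectral domain the echo contributes an additive cosine ripple, so the cepstral peak isolates the repetition period even when the marks themselves are too faint to see directly.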

The resolution of ultrasound medical images remains an important problem despite researchers' efforts. In this paper we present a nonlinear blind deconvolution to eliminate the blurring effect, based on the measured radio-frequency signal envelope. The algorithm is executed in two steps. First, we estimate the point-spread function (PSF); second, we use the estimated PSF to iteratively remove its effect. The proposed algorithm is a greedy algorithm, also called matching pursuit or CLEAN. The use of this algorithm is motivated because it theoretically avoids the so-called inverse problem, which usually needs regularization to obtain an optimal solution. The results are presented using 1D simulated signals, in terms of visual evaluation and nMSE, in comparison with the two best-known regularized solution methods for the least-squares problem: Tikhonov regularization (l2-norm) and Total Variation (l1-norm).

We present and characterize the catalog of galaxy shape measurements that will be used for cosmological weak lensing measurements in the Wide layer of the first year of the Hyper Suprime-Cam (HSC) survey. The catalog covers an area of 136.9 deg² split into six fields, with a mean i-band seeing of 0.″58 and a 5σ point-source depth of i ∼ 26. Given conservative galaxy selection criteria for first-year science, the depth and excellent image quality result in unweighted and weighted source number densities of 24.6 and 21.8 arcmin⁻², respectively. We define the requirements for cosmological weak lensing science with this catalog, then focus on characterizing potential systematics in the catalog using a series of internal null tests for problems with point-spread function (PSF) modeling, shear estimation, and other aspects of the image processing. We find that the PSF models narrowly meet requirements for weak lensing science with this catalog, with fractional PSF model size residuals of approximately 0.003 (requirement: 0.004) and values of the PSF model shape correlation function ρ1 on 1° scales that are sufficiently large as to require mitigation in cosmic shear measurements. Finally, we discuss the dominant systematics and the planned algorithmic changes to reduce them in future data reductions.

The Astronomical Röntgen Telescope X-ray Concentrator (ART-XC) is a hard X-ray telescope with energy response up to 30 keV, to be launched on board the Spectrum Röntgen Gamma (SRG) spacecraft in 2018. ART-XC consists of seven identical co-aligned mirror modules. Each mirror assembly is coupled with a CdTe double-sided strip (DSS) focal-plane detector. Eight X-ray mirror modules (seven flight units and one spare) for ART-XC were developed and fabricated at the Marshall Space Flight Center (MSFC), NASA, USA. We present results of testing procedures performed with an X-ray beam facility at MSFC to calibrate the point-spread function (PSF) of the mirror modules. The shape of the PSF was measured with a high-resolution CCD camera installed in the focal plane with a defocusing of 7 mm, as required by the ART-XC design. For each module, we performed a parametrization of the PSF at various angular distances Θ. We used a King function to approximate the radial profile of the near on-axis PSF (Θ …); the parametrisations agree between modules at the level of 10%. The on-axis angular resolution of the ART-XC optics varies between 27 and 33 arcsec (half-power diameter), except for the spare module.

The latest generation of nano computed tomography (nano-CT) systems with sub-micrometer-focus X-ray sources is expected to yield non-invasive imaging of the internal microstructure of objects with isotropic spatial resolution in the range of hundreds of nanometers, and commercial systems have recently become available. The quantitative characterization of the performance of nano-CT systems is important for evaluating the accuracy of size and density measurements of fine details in nano-CT images. The point-spread function (PSF) and modulation transfer function (MTF) are most commonly calculated from measurements of a thin-wire phantom when characterizing the spatial resolution of clinical CT systems. However, no consistent method for describing the spatial resolution of nano-CT has been established, because it would require a nanowire, i.e., a wire with a diameter of the order of tens of nanometers. This paper presents a method to characterize the spatial resolution in the x/y-scan plane (transversal orientation) of nano-CT systems using a relatively large microwire in the PSF measurement. In this method, the MTF computed from the PSF is estimated on the basis of a two-Gaussian PSF model. Experimenting with microwire images of three different diameters (3 μm, 10 μm, 30 μm) obtained by synchrotron radiation CT, we demonstrate the potential usefulness of the method for describing the spatial resolution of nano-CT systems.
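A two-Gaussian PSF model has a convenient analytic MTF, since the Fourier transform of a normalized Gaussian of width σ is exp(-2π²σ²f²). The sketch below illustrates this relationship; the widths and weights are assumed for illustration, not measured nano-CT values.

```python
import numpy as np

def two_gaussian_mtf(f, w1, sigma1, w2, sigma2):
    """MTF of a two-Gaussian PSF model.

    Each normalized Gaussian PSF component of width sigma contributes
    exp(-2 * pi^2 * sigma^2 * f^2); the model MTF is the weighted sum,
    normalized so that MTF(0) = 1.
    """
    m = (w1 * np.exp(-2 * np.pi**2 * sigma1**2 * f**2)
         + w2 * np.exp(-2 * np.pi**2 * sigma2**2 * f**2))
    return m / (w1 + w2)

# Hypothetical component widths in micrometres
f = np.linspace(0.0, 1.0, 200)            # spatial frequency, cycles/µm
mtf = two_gaussian_mtf(f, w1=0.7, sigma1=0.3, w2=0.3, sigma2=1.0)

# A common single-number resolution figure: the 10% MTF cut-off frequency
below = mtf < 0.1
f10 = f[np.argmax(below)] if below.any() else None
```

Fitting the two widths and weights to a measured microwire PSF, then evaluating this expression, is one way to realize the MTF estimation step described above.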

Removing the aberrations introduced by the point-spread function (PSF) is a fundamental aspect of astronomical image processing. The presence of noise in observed images makes deconvolution a nontrivial task that necessitates the use of regularisation. The task is particularly difficult when the PSF varies spatially, as is the case for the Euclid telescope. New surveys will provide images containing thousands of galaxies, and the deconvolution regularisation problem can be considered from a completely new perspective. In fact, one can assume that galaxies belong to a low-dimensional space. This work introduces the use of low-rank matrix approximation as a regularisation prior for galaxy image deconvolution and compares its performance with a standard sparse regularisation technique. The new approach leads to a natural way to handle a space-variant PSF. Deconvolution is performed using a Python code that implements a primal-dual splitting algorithm. The data set considered is a sample of 10 000 space-based galaxy images convolved with a known spatially varying Euclid-like PSF and including various levels of additive Gaussian noise. Performance is assessed by examining the deconvolved galaxy image pixels and shapes. The results demonstrate that for small samples of galaxies sparsity performs better in terms of pixel and shape recovery, while for larger samples it is possible to obtain more accurate estimates of the galaxy shapes using the low-rank approximation.
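The low-rank prior rests on the assumption that a stack of galaxy images lies near a low-dimensional subspace, so that a truncated SVD of the stack suppresses noise. The following is a minimal self-contained sketch of that idea on synthetic data (not the paper's primal-dual deconvolution algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stack: 500 "galaxy" images of 20x20 pixels (flattened to rows),
# all drawn from a 5-dimensional space, plus additive Gaussian noise
basis = rng.normal(size=(5, 400))
coeffs = rng.normal(size=(500, 5))
clean = coeffs @ basis
noisy = clean + 0.5 * rng.normal(size=clean.shape)

def low_rank_project(x, rank):
    """Project the image stack onto its top-`rank` singular vectors."""
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    return (u[:, :rank] * s[:rank]) @ vt[:rank]

denoised = low_rank_project(noisy, rank=5)

# The rank-5 projection should be closer to the clean stack than the noisy one
err_noisy = np.linalg.norm(noisy - clean)
err_lr = np.linalg.norm(denoised - clean)
```

In the deconvolution setting, this projection plays the role of the proximal step for the low-rank regulariser, applied between data-fidelity updates.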

Circular Synthetic Aperture Sonar (CSAS) processing coherently combines Synthetic Aperture Sonar (SAS) data acquired along a circular trajectory. This approach has a number of advantages; in particular, it maximises the aperture length of a SAS system, producing very-high-resolution sonar images. CSAS image reconstruction using back-projection algorithms, however, introduces a dissymmetry in the impulse response as the imaged point moves away from the centre of the acquisition circle. This paper proposes a sampling scheme for CSAS image reconstruction which allows every point within the full field of view of the system to be considered as the centre of a virtual CSAS acquisition. As a direct consequence of the proposed resampling scheme, the point-spread function (PSF) is uniform over the full CSAS image. Closed-form solutions for the CSAS PSF are derived analytically, both in the image and the Fourier domain. This thorough knowledge of the PSF leads naturally to the proposed adapted atom-wave basis for CSAS image decomposition. The atom-wave deconvolution is successfully applied to simulated data, increasing the image resolution by reducing the PSF energy leakage.

Currently, the ultimate limitation of space-based coronagraphy is the ability to subtract the residual PSF after wavefront correction to reveal the planet. Called reference differential imaging (RDI), the standard technique consists of conducting wavefront control to collect the reference point-spread function (PSF) by observing a bright star, and then extracting target planet signals by subtracting a weighted sum of reference PSFs. Unfortunately, this technique is inherently inefficient because it spends a significant fraction of the observing time on the reference star rather than the target star with the planet. Recent progress in model-based wavefront estimation suggests an alternative approach. A Kalman filter can be used to estimate the stellar PSF for correction by the wavefront control system while simultaneously estimating the planet signal. Without observing the reference star, the (extended) Kalman filter directly utilizes the wavefront correction data and combines the time-series observations and model predictions to estimate the stellar PSF and planet signals. Because wavefront correction is used during the entire observation with no slewing, the system has inherently better stability. In this poster we show our results aimed at further improving the Kalman filter estimation accuracy by including not only temporal correlations but also spatial correlations among neighboring pixels in the images, a technique known as a Gaussian process Kalman filter (GPKF). We also demonstrate the advantages of using a Kalman filter rather than RDI by simulating a realistic space exoplanet detection mission.
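The core estimation idea can be illustrated with a deliberately simplified two-state Kalman filter (not the authors' GPKF): one detector pixel sees a drifting stellar-speckle intensity plus a constant planet intensity, and the filter separates the two over time. All dynamics and noise values below are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

# State: [speckle intensity, planet intensity]. The planet is constant;
# the speckle drifts as an AR(1) process. All numbers are illustrative.
phi, q, r_noise = 0.95, 0.05, 0.1
F = np.array([[phi, 0.0], [0.0, 1.0]])    # state transition
Q = np.diag([q, 0.0])                     # process noise covariance
H = np.array([[1.0, 1.0]])               # detector sees speckle + planet

x_true = np.array([1.0, 0.2])             # true planet intensity = 0.2
x_est = np.zeros(2)
P = np.eye(2)

for _ in range(2000):
    # Simulate truth and a noisy measurement
    x_true = F @ x_true + np.array([np.sqrt(q) * rng.normal(), 0.0])
    z = H @ x_true + np.sqrt(r_noise) * rng.normal()
    # Kalman predict
    x_est = F @ x_est
    P = F @ P @ F.T + Q
    # Kalman update
    S = H @ P @ H.T + r_noise
    K = P @ H.T / S
    x_est = x_est + (K * (z - H @ x_est)).ravel()
    P = (np.eye(2) - K @ H) @ P

planet_estimate = x_est[1]
```

The GPKF extends this by modeling spatial correlations among neighboring pixels with a Gaussian-process prior, rather than filtering each pixel independently.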

The study of unobscured active galactic nuclei (AGN) and quasars depends on the reliable decomposition of the light from the AGN point source and the extended host galaxy light. The problem is typically approached with parametric fitting routines that use separate models for the host galaxy and the point-spread function (PSF). We present a new approach using a Generative Adversarial Network (GAN) trained on galaxy images. We test the method using Sloan Digital Sky Survey (SDSS) r-band images to which artificial AGN point sources are added and then removed using the GAN, and compare with parametric fits using GALFIT. When the AGN point source (PS) is more than twice as bright as the host galaxy, we find that our method, PSFGAN, can recover PS and host galaxy magnitudes with smaller systematic error and a lower average scatter (49%). PSFGAN is more tolerant of poor knowledge of the PSF than parametric methods: our tests show that PSFGAN is robust against a broadening of the PSF width of ±50% if it is trained on multiple PSFs. We demonstrate that while a matched training set does improve performance, we can still subtract point sources using a PSFGAN trained on non-astronomical images. While initial training is computationally expensive, evaluating PSFGAN on data is more than 40 times faster than fitting two components with GALFIT. Finally, PSFGAN is more robust and easier to use than parametric methods, as it requires no input parameters.

The wavefront coding paradigm can be used not only for compensation of aberrations and depth-of-field improvement but also for optical encryption. When a diffractive optical element (DOE) with a known point-spread function (PSF) is placed in the optical path, the recorded image is the optical convolution of the true image with that PSF; an optically encoded image is thus registered instead of the true image. The registered image can be decoded using standard digital deconvolution methods. In this class of optical-digital systems, the PSF of the DOE serves as the encryption key, so the reliability and cryptographic resistance of the method depend on the size and complexity of the PSF used for optical encoding. This paper gives a preliminary analysis of the reliability and possible vulnerabilities of this encryption method. Experimental results of a brute-force attack on optically encrypted images are presented, and the reliability of optical coding based on the wavefront coding paradigm is estimated.
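The encode/decode cycle can be sketched with a standard Wiener deconvolution; the 3×3 box "key" and the regularisation constant below are hypothetical stand-ins for a real DOE PSF, chosen only to make the example self-contained.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=1e-3):
    """Frequency-domain Wiener deconvolution with regularisation constant k."""
    H = np.fft.fft2(psf, s=blurred.shape)
    W = np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))

rng = np.random.default_rng(3)
image = rng.random((32, 32))              # stand-in for the true image

# Hypothetical DOE "key": a normalized 3x3 box PSF
psf = np.zeros((32, 32))
psf[:3, :3] = 1.0 / 9.0

# Optical encoding = convolution with the key (circular, via FFT)
encoded = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(psf)))

# Decoding requires knowledge of the key; here the data are noiseless,
# so a very small k suffices (with noise, k trades sharpness for stability)
decoded = wiener_deconvolve(encoded, psf, k=1e-8)
```

A brute-force attack in this framing amounts to trying candidate PSFs as keys; the larger and more complex the true PSF, the larger the key space.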

Commonly in neutron image experiments, the interpretation of the point-spread function (PSF) is limited to describing the achievable spatial resolution in an image. In this article it is shown that for various PSF models, the resulting blurring due to the PSF affects the quantification of the neutron transmission of an object and that the effect is separate from the scattered neutron field from the sample. The effect is observed in several neutron imaging detector configurations using different neutron scintillators and light sensors. In the context of estimation of optical densities with an algorithm that assumes a parallel beam, the effect of blurring fractionates the neutron signal spatially and introduces an effective background that scales with the area of the detector illuminated by neutrons. Examples are provided that demonstrate that the illuminated field of view can alter the observed neutron transmission for nearly purely absorbing objects. It is found that by accurately modeling the PSF, image restoration methods can yield more accurate estimates of the neutron attenuation by an object.

Sparse delta function series occur as data in many chemical analyses and seismic methods. These original data are often sufficiently degraded by the recording instrument response that the individual delta-function peaks are difficult to distinguish and measure. One method that has been used to measure these peaks is to fit a parameterized model with a nonlinear least-squares fitting algorithm. The deconvolution approaches described here have the advantage of requiring neither a parameterized point-spread function nor a fixed number of peaks. Two new methods are presented: the maximum power technique is reviewed, and a maximum a posteriori technique is introduced. Results of both methods on simulated and real data are presented. The characteristics of the data determine which method gives superior results.

The 4Pi-confocal fluorescence microscope is a recently developed 3D imaging technique in which two opposing high-NA objectives are used for coherently illuminating and/or detecting the same point of the fluorescent sample. The interference process yields an intensity point-spread function (PSF) with an extremely narrow axial core, but with very large axial sidelobes, which compromise the actual improvement in axial resolution. To overcome this problem we propose the use, in the illumination arm of the 4Pi-confocal microscope, of multiple-zone phase filters whose design is based on the Toraldo principle, which allows one to select at will the positions of the zeros of the PSF of an optical system. What we propose here is to design a phase pupil filter such that the position of the first zero of the illumination axial PSF is close to the position of the maximum of the first axial sidelobe of the detection PSF. The design procedure takes into account that: 1. The value of the parameter ε = λexc/λdet is, in a single-photon fluorescence process, responsible for the different scales of the illumination and detection PSFs. 2. The Toraldo procedure was originally designed to control the position of zeros of the transverse PSF; here the procedure is adapted to control the position of zeros of the axial PSF. 3. Since 4Pi-confocal microscopes are only useful when built with high-NA objectives, the Toraldo principle is reformulated in terms of nonparaxial diffraction theory. We show that by using Toraldo filters in the illumination path of a 4Pi-confocal microscope it is possible to obtain up to a 60% reduction in the height of the axial sidelobe of the whole-system axial PSF. This allows the axial resolution to fully benefit from the strong narrowness of the central peak of the axial PSF, inherent to the 4Pi principle. Copyright (2002) Australian Society for Electron Microscopy

Measuring and incorporating a scanner-specific point-spread function (PSF) within image reconstruction has been shown to improve spatial resolution in PET. However, due to the short half-life of clinically used isotopes, other long-lived isotopes not used in clinical practice are employed to perform the PSF measurements. As such, non-optimal PSF models that do not correspond to those needed for the data to be reconstructed are used within resolution modeling (RM) image reconstruction, usually underestimating the true PSF owing to the difference in positron range. In high-resolution brain and preclinical imaging, this effect is of particular importance, since the PSFs become more positron-range limited and isotope-specific PSFs can help maximize the performance benefit of resolution recovery image reconstruction algorithms. In this work, the authors used a printing technique to simultaneously measure multiple point sources on the High Resolution Research Tomograph (HRRT), and demonstrated the feasibility of deriving isotope-dependent system matrices from fluorine-18 and carbon-11 point sources. Furthermore, the authors evaluated the impact of incorporating them within RM image reconstruction, using carbon-11 phantom and clinical datasets on the HRRT. The results obtained using these two isotopes illustrate that even small differences in positron range can result in different PSF maps, leading to further improvements in contrast recovery when used in image reconstruction. The difference is more pronounced in the centre of the field of view, where the full width at half maximum (FWHM) from the positron range has a larger contribution to the overall FWHM, compared to the edge, where the parallax error dominates. Based on the proposed methodology, measured isotope-specific and spatially variant PSFs can be reliably derived and used for improved spatial resolution and variance performance in resolution recovery image reconstruction.

Purpose: Measuring and incorporating a scanner-specific point-spread function (PSF) within image reconstruction has been shown to improve spatial resolution in PET. However, due to the short half-life of clinically used isotopes, other long-lived isotopes not used in clinical practice are employed to perform the PSF measurements. As such, non-optimal PSF models that do not correspond to those needed for the data to be reconstructed are used within resolution modeling (RM) image reconstruction, usually underestimating the true PSF owing to the difference in positron range. In high-resolution brain and preclinical imaging, this effect is of particular importance, since the PSFs become more positron-range limited and isotope-specific PSFs can help maximize the performance benefit of resolution recovery image reconstruction algorithms. Methods: In this work, the authors used a printing technique to simultaneously measure multiple point sources on the High Resolution Research Tomograph (HRRT), and demonstrated the feasibility of deriving isotope-dependent system matrices from fluorine-18 and carbon-11 point sources. Furthermore, the authors evaluated the impact of incorporating them within RM image reconstruction, using carbon-11 phantom and clinical datasets on the HRRT. Results: The results obtained using these two isotopes illustrate that even small differences in positron range can result in different PSF maps, leading to further improvements in contrast recovery when used in image reconstruction. The difference is more pronounced in the centre of the field of view, where the full width at half maximum (FWHM) from the positron range has a larger contribution to the overall FWHM, compared to the edge, where the parallax error dominates. Conclusions: Based on the proposed methodology, measured isotope-specific and spatially variant PSFs can be reliably derived and used for improved spatial resolution and variance performance in resolution recovery image reconstruction.

The resolution of Single Molecule Localization Microscopy (SML) depends on the width of the Point-Spread Function (PSF) and the number of photons collected. However, biological samples tend to degrade the shape of the PSF due to the heterogeneity of the index of refraction. In addition, there are aberrations caused by imperfections in the optical components and alignment, and by the refractive-index mismatch between the coverslip and the sample, all of which directly reduce the accuracy of SML. Adaptive Optics (AO) can play a critical role in compensating for these aberrations in order to increase the resolution. However, the stochastic nature of single-molecule emission presents a challenge for wavefront optimization, because the large fluctuations in photon emission rule out many traditional optimization techniques. Here we present an approach that optimizes the wavefront during SML acquisition by combining an intensity-independent merit function with a genetic algorithm (GA) to optimize the PSF despite the fluctuating intensity. We demonstrate the use of AO with the GA in tissue culture cells and through ~50 µm of tissue in the Drosophila Central Nervous System (CNS) to achieve a 4-fold increase in localization precision.

Optical super-resolution techniques such as single-molecule localization have become one of the most dynamically developing areas in optical microscopy. These techniques routinely provide images of fixed cells or tissues with sub-diffraction spatial resolution, and can even be applied to live-cell imaging under appropriate circumstances. Localization techniques are based on the precise fitting of point-spread functions (PSFs) to the measured images of stochastically excited, identical fluorescent molecules. They require controlling the rates between the on, off, and bleached states, keeping the number of active fluorescent molecules at an optimum value so that their diffraction-limited images can be detected separately both spatially and temporally. Because of the numerous (and sometimes unknown) parameters, the imaging system can only be handled stochastically. For example, the rotation of the dye molecules obscures the polarization-dependent PSF shape, and only an averaged distribution, typically estimated by a Gaussian function, is observed. The TestSTORM software was developed to generate image stacks for traditional localization microscopes, where localization meant the precise determination of the spatial position of the molecules. However, additional optical properties (polarization, spectra, etc.) of the emitted photons can be used to further monitor chemical and physical properties (viscosity, pH, etc.) of the local environment. The image-stack generating program was therefore upgraded with several new features, such as multicolour imaging, polarization-dependent PSFs, built-in 3D visualization, and structured background. These features make the program an ideal tool for optimizing the imaging and sample preparation conditions.

Two-step gamma cascades (TSCs) following the radiative capture of thermal neutrons in 77Se were measured at the research reactor at Řež near Prague. Results on photon strength functions (PSFs) of 78Se, obtained by comparing experimental TSC spectra with the outcomes of simulations under different assumptions about level density and PSFs using the DICEBOX algorithm, are presented. The main attention is paid to the possible manifestation of the pygmy resonance recently observed in this nucleus in a nuclear resonance fluorescence measurement, and of the low-energy PSF enhancement observed in Oslo-type experiments for all A ≲ 100 nuclei.

Weak lensing by large-scale structure is an invaluable cosmological tool given that most of the energy density of the concordance cosmology is invisible. Several large ground-based imaging surveys will attempt to measure this effect over the coming decade, but reliable control of the spurious lensing signal introduced by atmospheric turbulence and telescope optics remains a challenging problem. We address this challenge with a demonstration that point-spread function (PSF) effects on measured galaxy shapes in the Sloan Digital Sky Survey (SDSS) can be corrected with existing analysis techniques. In this work, we co-add existing SDSS imaging on the equatorial stripe in order to build a data set with the statistical power to measure cosmic shear, while using a rounding kernel method to null out the effects of the anisotropic PSF. We build a galaxy catalogue from the combined imaging, characterize its photometric properties and show that the spurious shear remaining in this catalogue after the PSF correction is negligible compared to the expected cosmic shear signal. We identify a new source of systematic error in the shear-shear autocorrelations arising from selection biases related to masking. Finally, we discuss the circumstances in which this method is expected to be useful for upcoming ground-based surveys that have lensing as one of the science goals, and identify the systematic errors that can reduce its efficacy.

Optical navigation is one of the most promising technologies for deep-space autonomous navigation. However, since the optical images are severely motion-blurred, it is difficult to extract the lines of sight (LOS) to the beacons needed to reckon the spacecraft attitude and orbital position during the deep-space cruise phase. This paper proposes a new blind restoration approach to effectively recover a clear image. We use a modified median filter to eliminate salt-and-pepper noise, and construct a blind estimation model, built on the sparsity of the gradient of the navigation image, to estimate the global point-spread function (PSF). Moreover, we select a few bright beacons to recover the motion-blurred image, where the average value of the beacon centroid is adopted to calculate the relative position for optical navigation. We present simulations and an actual image restoration experiment to demonstrate the accuracy and consistency of the relative position obtained from the recovered navigation image. The proposed method shows superior performance in comparison with the multiple cross-correlation method: the estimated PSF is close to the real PSF, and the distributed energy of the beacon is concentrated, ensuring a high SNR, with a relative-position accuracy better than 0.1 pixel.

The images captured by an airborne range-gated imaging system are degraded by many factors, such as light scattering, noise, defocus of the optical system, atmospheric disturbances, and platform vibrations. The characteristics of low illumination, few details, and high noise cause state-of-the-art restoration methods to fail. In this paper, we present a restoration method designed specifically for range-gated imaging systems. The degradation process is divided into two parts: a static part and a dynamic part. For the static part, we establish a physical model of the imaging system according to laser transmission theory and estimate the static point-spread function (PSF). For the dynamic part, a so-called light-vein feature extraction method is presented to estimate the fuzzy parameter of the atmospheric disturbance and platform movement, which contribute to the dynamic PSF. Finally, combining the static and dynamic PSFs, an iterative updating framework is used to restore the image. Compared with state-of-the-art methods, the proposed method effectively suppresses ringing artifacts and achieves better performance in a range-gated imaging system.

Ground-based imagers at 8 m class telescopes assisted by multi-conjugate adaptive optics are primary facilities with which to obtain accurate photometry and proper motions in dense stellar fields. We observed the central region of the globular clusters Liller 1 and NGC 6624 with the Gemini Multi-conjugate Adaptive Optics System (GeMS) feeding the Gemini South Adaptive Optics Imager (GSAOI) currently available at the Gemini South telescope, under different observing conditions. We characterized the stellar point-spread function (PSF) in terms of FWHM, Strehl ratio (SR), and encircled energy (EE), over the field of view (FOV). We found that, for sub-arcsecond seeing at the observed airmass, we can obtain the diffraction-limited PSF (FWHM ≈ 80 mas), SR ˜ 40%, and EE ≥ 50% with a dispersion around 10% over the FOV of 85″ × 85″, in the K s band. In the J band the best images provide FWHMs between 60 and 80 mas, SR > 10%, and EE > 40%. For seeing at the observed airmass exceeding 1″, the performance worsens, but it is still possible to perform PSF-fitting photometry with 25% EE in J and 40% in K s . We also computed the geometric distortions of GeMS/GSAOI and we obtained corrected images with an astrometric accuracy of ˜1 mas in a stellar field with high crowding.

Dynamic contrast-enhanced MRI (or DCE-MRI) is a useful tool for measuring blood flow and perfusion, and it has found use in the study of pulmonary perfusion in animal models. However, DCE-MRI experiments are difficult in small animals such as rats. A recently developed method known as Interleaved Radial Imaging and Sliding window-keyhole (IRIS) addresses this problem by using a data acquisition scheme that covers (k,t)-space with data acquired from multiple bolus injections of a contrast agent. However, the temporal resolution of IRIS is limited by the effects of temporal averaging inherent in the sliding window and keyhole operations. This article describes a new method to cover (k,t)-space based on the theory of partially separable functions (PSF). Specifically, a sparse sampling of (k,t)-space is performed to acquire two data sets, one with high-temporal resolution and the other with extended k-space coverage. The high-temporal resolution training data are used to determine the temporal basis functions of the PSF model, whereas the other data set is used to determine the spatial variations of the model. The proposed method was validated by simulations and demonstrated by an experimental study. In this particular study, the proposed method achieved a temporal resolution of 32 msec.
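The partially separable model assumes the (k,t)-space signal factorizes as s(k,t) = Σ_l u_l(k) v_l(t), so a few fully sampled "training" lines determine the temporal basis and sparse imaging data then determine the spatial coefficients. A minimal noiseless sketch of that two-step scheme, on synthetic data with hypothetical dimensions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic rank-3 partially separable signal: s(k,t) = sum_l u_l(k) v_l(t)
L, nk, nt = 3, 64, 200
U = rng.normal(size=(nk, L))              # spatial factors u_l(k)
V = rng.normal(size=(L, nt))              # temporal factors v_l(t)
data = U @ V

# High-temporal-resolution training data: a few k-space lines sampled at
# every time point, from which the temporal basis functions are extracted
nav = data[:4, :]
_, _, vt = np.linalg.svd(nav, full_matrices=False)
temporal_basis = vt[:L]

# Sparse imaging data: each k-space line is sampled at only 12 time points;
# its spatial coefficients are fit by least squares on the temporal basis
recon = np.zeros_like(data)
for k in range(nk):
    t_idx = rng.choice(nt, size=12, replace=False)
    A = temporal_basis[:, t_idx].T        # (12, L) design matrix
    c, *_ = np.linalg.lstsq(A, data[k, t_idx], rcond=None)
    recon[k] = c @ temporal_basis
```

With noiseless data and enough samples per line, the rank-L fit recovers the full (k,t)-space exactly; in practice the model order and sampling pattern trade temporal resolution against noise robustness.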

PSF- and CPMPSF-based ultrafiltration membranes were prepared by the phase-inversion process using water as the nonsolvent at 4 °C and 15 °C, employing casting dopes having different amounts of polymer (PSF or CPMPSF), polyvinylpyrrolidone (PVP), and the solvent dimethylformamide (DMF). The membranes were ...

The problem of improving the resolution of NOAA AVHRR images, which is no higher than 1 × 1 km2/pixel, is solved. The a priori information invoked for the solution of this problem includes, first, the assumption that the initial video data have higher (subpixel) resolution than the AVHRR data. In addition, it is assumed that the recorded images are formed by a scanning spot with a fixed, but unknown, pupil function. A salient feature of the proposed approach is the adaptive reconstruction of a 'spreading-smoothing' operator, which is, in its turn, the convolution of two point-spread functions (PSFs): one describing the instrumental function of the scanner pupil and another simulating the interpolation method used for reconstruction of the missing radio-brightness values of the subpixel raster. Examples of the improved resolution of real images are presented.
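The composition of the two PSFs into a single spreading-smoothing operator is just a convolution of the two kernels. A 1-D sketch, with a hypothetical Gaussian pupil PSF and a triangular (linear-interpolation) kernel standing in for the two factors:

```python
import numpy as np

def convolve_full(a, b):
    """1-D full convolution of two discrete kernels (np.convolve wrapper)."""
    return np.convolve(a, b)

# Hypothetical scanner-pupil PSF: a normalized discrete Gaussian
x = np.arange(-3, 4)
pupil_psf = np.exp(-0.5 * (x / 1.2) ** 2)
pupil_psf /= pupil_psf.sum()

# Linear interpolation corresponds to a triangular kernel
interp_kernel = np.array([0.25, 0.5, 0.25])

# The 'spreading-smoothing' operator is the convolution of the two PSFs
combined = convolve_full(pupil_psf, interp_kernel)
```

Since both factors are normalized, the combined operator also sums to one; in the adaptive scheme described above, the pupil factor would be estimated from the data rather than assumed.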

In this paper, we present results from the weak-lensing shape measurement GRavitational lEnsing Accuracy Testing 2010 (GREAT10) Galaxy Challenge. This marks an order-of-magnitude step change in the level of scrutiny employed in weak-lensing shape measurement analysis. We provide descriptions of each method tested and include 10 evaluation metrics over 24 simulation branches. GREAT10 was the first shape measurement challenge to include variable fields; both the shear field and the point-spread function (PSF) vary across the images in a realistic manner. The variable fields enable a variety of metrics that are inaccessible to constant-shear simulations, including a direct measure of the impact of shape measurement inaccuracies, and of PSF size and ellipticity, on the shear power spectrum. To assess the impact of shape measurement bias for cosmic shear, we present a general pseudo-Cℓ formalism that propagates spatially varying systematics in cosmic shear through to power spectrum estimates. We also show how one-point estimators of bias can be extracted from variable shear simulations. The GREAT10 Galaxy Challenge received 95 submissions and saw a factor of 3 improvement in the accuracy achieved by shape measurement methods. The best methods achieve sub-per cent average biases. We find a strong dependence of accuracy on signal-to-noise ratio, and indications of a weak dependence on galaxy type and size. Some requirements for the most ambitious cosmic shear experiments are met above a signal-to-noise ratio of 20. These results come with the caveat that the simulated PSF was a ground-based PSF. Our results are a snapshot of the accuracy of current shape measurement methods and a benchmark upon which improvements can be built, providing a foundation for a better understanding of the strengths and limitations of shape measurement methods.

Context: This is the first paper of a series describing our measurement of weak lensing by large-scale structure, also termed "cosmic shear", using archival observations from the Advanced Camera for Surveys (ACS) on board the Hubble Space Telescope (HST). Aims: In this work we present results from a pilot study testing the capabilities of the ACS for cosmic shear measurements with early parallel observations, and we present a re-analysis of HST/ACS data from the GEMS survey and the GOODS observations of the Chandra Deep Field South (CDFS). Methods: We describe the data reduction and, in particular, a new correction scheme for the time-dependent ACS point-spread function (PSF) based on observations of stellar fields. This is currently the only technique which takes the full time variation of the PSF between individual ACS exposures into account. We estimate that our PSF correction scheme reduces the systematic contribution to the shear correlation functions due to PSF distortions to a negligible level. Combining observations of 96 galaxies arcmin-2 with the photometric redshift catalogue of the GOODS-MUSIC sample, we determine a local single-field estimate for the mass power spectrum normalisation σ8, CDFS=0.52+0.11-0.15 (stat) ± 0.07 (sys) (68% confidence assuming Gaussian cosmic variance) at a fixed matter density Ωm=0.3 for a ΛCDM cosmology, marginalising over the uncertainty of the Hubble parameter and the redshift distribution. We interpret this exceptionally low estimate as being due to a local under-density of the foreground structures in the CDFS. Based on observations made with the NASA/ESA Hubble Space Telescope, obtained from the data archives at the Space Telescope European Coordinating Facility and the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.

Chronic Heart Failure (CHF) is a debilitating illness commonly encountered in primary care. Its prevalence in developing countries is rising as a result of an ageing population and an escalating epidemic of hypertension, type 2 diabetes and coronary heart disease. CHF can be specifically diagnosed as Heart Failure with Reduced Systolic Function (HF-RSF) or Heart Failure with Preserved Systolic Function (HF-PSF). This paper illustrates a common presentation of HF-PSF in primary care and critically appraises the evidence in support of its diagnosis, prognosis and management. Regardless of the specific diagnosis, long-term management of CHF is intricate, as it involves a complex interplay between medical, psychosocial and behavioural factors. Hence, there is a pressing need for multidisciplinary team management of CHF in primary care, and this usually takes place within the broader context of an integrated chronic disease management programme. Primary care physicians are ideally suited to lead multidisciplinary teams to ensure that better co-ordination, continuity and quality of care are delivered for patients with chronic conditions across time and settings. Given the rising epidemic of cardiovascular risk factors in the Malaysian population, preventive strategies at the primary care level are likely to offer the greatest promise for reducing the growing burden of CHF.

We have calculated the aperture function of a positron computed tomograph (PCT) by computer simulation, and evaluated the axial resolution of a multislice PCT, Positologica III, both theoretically and experimentally. The axial point-spread function (PSF) was approximately triangular at or near the center of the field, and the slice sensitivity decreased significantly as the source moved away from the image plane. Accordingly, there were low-sensitivity areas between an in-plane slice and the adjacent cross-plane slice. This invisible region is clinically significant if the object is thin enough in the z-axis. In order to fill the gaps between adjacent slices, it is worthwhile to move the patient by half the slice interval along the z-axis and perform an "interpolating scan".

A design is developed for a permanent magnet assembly (PMA) useful as the magnetic focusing unit for the 35 and 70 mm (diagonal) format SEC tubes. Detailed PMA designs for both tubes are given, together with all data on their magnetic configuration, size, weight, and the structure of magnetic shields adequate to screen the camera tube from the earth's magnetic field. A digital computer is used for the PMA design simulations, and the expected operational performance of the PMA is ascertained through the calculation of a series of photoelectron trajectories. A large volume where the magnetic field uniformity is better than 0.5% appears obtainable, and the point-spread function (PSF) and modulation transfer function (MTF) indicate nearly ideal performance. The MTF at 20 cycles per mm exceeds 90%. The weight and volume appear tractable for the large space telescope and for ground-based application.

Purpose. To compare the visual quality of patients with keratoconus who underwent penetrating keratoplasty (PKP) or deep anterior lamellar keratoplasty (DALK) with fluid dissection. Design. Cross-sectional, observational study. Methods. Twelve eyes that underwent PKP (PKP group) were compared to 24 eyes that underwent DALK (DALK group) after complete removal of sutures and stabilization of refraction. Visual, refractive, corneal topographic, corneal aberrometry, and ocular aberrometry parameters were compared for both groups. The χ² and Mann–Whitney U tests were used for comparisons as appropriate. Results. P > 0.05 for all comparisons. All aberrations, point-spread functions (PSF), and the modulation transfer function (MTF) were not statistically different between groups (P > 0.05). Conclusion. In our small study, the postoperative PKP and DALK with fluid dissection patient groups had vision/optical quality parameters that were not statistically different. This may indicate that DALK with fluid dissection can replace PKP for keratoconus without compromising vision quality.

In this paper we apply a sparse signal recovery technique to synthetic aperture radar (SAR) image formation from interrupted phase history data. Timeline constraints imposed on modern multi-function radars result in interrupted SAR data collection, which in turn leads to corrupted imagery that degrades reliable change detection. In this paper we extrapolate the missing data by applying the basis pursuit denoising (BPDN) algorithm in the image formation step, effectively modeling the SAR scene as sparse. We investigate the effects of regular and random interruptions on the SAR point-spread function (PSF), as well as on the quality of both coherent (CCD) and non-coherent (NCCD) change detection. We contrast the sparse reconstruction with the matched filter (MF) method, implemented via Fourier processing with the missing data set to zero. To illustrate the capabilities of the gap-filling sparse reconstruction algorithm, we evaluate change detection performance using a pair of images from the GOTCHA data set.
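The gap-filling idea above can be sketched in one dimension: treat the scene as sparse, model the interrupted collection as a masked Fourier operator, and solve the basis-pursuit-denoising objective. The sketch below uses plain ISTA iterations as a stand-in solver (the paper's BPDN solver, scene, and interruption pattern are not specified here; everything below is an illustrative assumption).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.uniform(1.0, 2.0, 5)  # sparse scene

mask = np.ones(n, dtype=bool)
mask[60:100] = False                        # interrupted collection gap

def A(x):                                   # masked, normalized DFT
    return np.fft.fft(x)[mask] / np.sqrt(n)

def At(z):                                  # adjoint: zero-fill the gap
    full = np.zeros(n, dtype=complex)
    full[mask] = z
    return np.fft.ifft(full) * np.sqrt(n)

y = A(x_true)                               # observed phase-history samples
lam, step = 0.01, 1.0
x = np.zeros(n, dtype=complex)
for _ in range(200):                        # ISTA: gradient step + soft-threshold
    g = x + step * At(y - A(x))
    mag = np.abs(g)
    x = g * np.maximum(mag - step * lam, 0.0) / np.maximum(mag, 1e-12)

mf = At(y)                                  # "matched filter": gap set to zero
```

The matched-filter reconstruction `mf` keeps the gap-induced sidelobes, while the sparse solution suppresses them, which is the mechanism behind the improved change detection reported above.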

We present first results from the third GRavitational lEnsing Accuracy Testing (GREAT3) challenge, the third in a sequence of challenges for testing methods of inferring weak gravitational lensing shear distortions from simulated galaxy images. GREAT3 was divided into experiments to test three specific questions, and included simulated space- and ground-based data with constant or cosmologically varying shear fields. The simplest (control) experiment included parametric galaxies with a realistic distribution of signal-to-noise ratio, size, and ellipticity, and a complex point-spread function (PSF). The other experiments tested the additional impact of realistic galaxy morphology, multiple-exposure imaging, and uncertainty about a spatially varying PSF; the last two questions will be explored in Paper II. The 24 participating teams competed to estimate lensing shears to within the systematic error tolerances for upcoming Stage-IV dark energy surveys, making 1525 submissions overall. GREAT3 saw considerable variety and innovation in the types of methods applied. Several teams now meet or exceed the targets in many of the tests conducted (to within the statistical errors). We conclude that the presence of realistic galaxy morphology in simulations changes shear calibration biases by ~1 per cent for a wide range of methods. Other effects, such as truncation biases due to finite galaxy postage stamps and the impact of galaxy type as measured by the Sérsic index, are quantified for the first time. Our results generalize previous studies regarding sensitivities to galaxy size and signal-to-noise ratio, and to PSF properties such as seeing and defocus. Almost all methods' results support the simple model in which additive shear biases depend linearly on PSF ellipticity.

Blind image deconvolution (BID) aims to remove or reduce degradations that occurred during acquisition or processing. It is a challenging ill-posed problem, because the degraded image lacks enough information for unambiguous recovery of both the point-spread function (PSF) and the clear image. Although many powerful algorithms have appeared recently, BID remains an active research area owing to the diversity of degraded images and degradations. Closed-loop control systems are characterized by their powerful ability to stabilize the behavior response and overcome external disturbances by designing an effective feedback optimization. In this paper, we employ feedback control to enhance the stability of BID by driving the current estimation quality of the PSF to the desired level without manually selecting restoration parameters, using an effective combination of machine learning with feedback optimization. The foremost challenge when designing a feedback structure is to construct or choose a suitable performance metric as a controlled index and as feedback information. Our proposed quality metric is based on the blur assessment of deconvolved patches to identify the best PSF and compute its relative quality. A Kalman filter-based extremum seeking approach is employed to find the optimum value of the controlled variable. To find better restoration parameters, learning algorithms, such as multilayer perceptrons and bagged decision trees, are used to estimate the generic PSF support size instead of trial-and-error methods. The problem is modeled as a combination of pattern classification and regression using multiple training features, including noise metrics, blur metrics, and low-level statistics. A multi-objective genetic algorithm is used to find key patches from multiple saliency maps, which enhances performance and saves computation by avoiding ineffectual regions of the image. The proposed scheme is shown to outperform corresponding open

Imaging thick biological samples introduces spherical aberration (SA) due to the refractive index (RI) mismatch between the specimen and the imaging lens immersion medium. SA increases with either depth or RI mismatch, so it is difficult to find a static compensator for SA [1]. Different wavefront coding methods [2, 3] have been studied to find an optimal static wavefront correction that reduces depth-induced SA. Inspired by a recent design of a radially symmetric squared cubic (SQUBIC) phase mask that was tested for scanning confocal microscopy [1], we have modified the pupil using the SQUBIC mask to engineer the point-spread function (PSF) of a wide-field fluorescence microscope. In this study, simulated images of a thick test object were generated using a wavefront-encoded engineered PSF (WFE-PSF) and were restored using space-invariant (SI) and depth-variant (DV) expectation maximization (EM) algorithms implemented in the COSMOS software [4]. Quantitative comparisons between restorations obtained with the conventional and WFE PSFs are presented. Simulations show that, in the presence of SA, the use of the SI-EM algorithm and a single SQUBIC-encoded WFE-PSF can yield adequate image restoration. In addition, in the presence of a large amount of SA, it is possible to get adequate results using the DV-EM with fewer DV-PSFs than would typically be required for processing images acquired with a clear circular aperture (CCA) PSF. This result implies that modification of a wide-field system with the SQUBIC mask renders the system less sensitive to depth-induced SA and suitable for imaging samples at larger optical depths.
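The EM restoration used above is, in its classic form, the Richardson-Lucy iteration: forward-blur the current estimate, compare with the measured image, and back-project the ratio. The 1-D sketch below illustrates that iteration with a Gaussian PSF; it is a minimal stand-in, not the COSMOS implementation or its depth-variant extension.

```python
import numpy as np

def richardson_lucy(image, psf, iters=50):
    """Classic Richardson-Lucy (EM) deconvolution, 1-D sketch."""
    psf = psf / psf.sum()
    psf_flip = psf[::-1]                          # adjoint of the blur
    est = np.full_like(image, image.mean())       # flat initial estimate
    for _ in range(iters):
        conv = np.convolve(est, psf, mode="same")
        ratio = image / np.maximum(conv, 1e-12)   # data / model
        est *= np.convolve(ratio, psf_flip, mode="same")
    return est

x = np.zeros(64); x[20] = 5.0; x[40] = 3.0            # two point sources
psf = np.exp(-0.5 * (np.arange(-7, 8) / 2.0) ** 2)    # assumed Gaussian PSF
psf /= psf.sum()
blurred = np.convolve(x, psf, mode="same")
restored = richardson_lucy(blurred, psf)
```

A depth-variant scheme replaces the single `psf` with a set of PSFs indexed by depth, which is exactly where reducing the number of required DV-PSFs pays off.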

The Hartmann-Shack sensor (HSS) has been used for objective measurement of human-eye wavefront aberration, but the effect of sampling-point size on the accuracy of the result has not been reported. In this paper, a mathematical model of the point-spread function (PSF) of the whole system was obtained by analyzing the structure of the optical imaging system used for human-eye wavefront aberration measurement. The impact of the Airy spot size on the accuracy of the system was analyzed. Statistical analysis shows that the geometric size of the Airy spot that an ideal light source from the eye's retina forms on the surface of the HSS is far smaller than the size of the HSS sample-point image used in the experiment. Therefore, the effect of the Airy spot on the precision of the system can be ignored. This study theoretically and experimentally justifies the reliability and accuracy of human-eye wavefront aberration measurement based on the HSS.
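The size comparison argued above rests on the diffraction-limited Airy-disk radius, r = 1.22 λ f / D. The snippet below evaluates it for a lenslet; the wavelength, focal length, and aperture are illustrative assumptions, not the parameters of the instrument in this record.

```python
# Airy-disk radius of a diffraction-limited spot: r = 1.22 * lambda * f / D.
# All numbers below are hypothetical lenslet parameters for illustration.
wavelength = 785e-9        # m, assumed near-IR beacon wavelength
f_lenslet = 5e-3           # m, assumed lenslet focal length
d_lenslet = 150e-6         # m, assumed lenslet aperture (pitch)

airy_radius = 1.22 * wavelength * f_lenslet / d_lenslet   # metres
print(f"Airy radius: {airy_radius * 1e6:.1f} um")
```

Comparing this radius with the measured spot size on the sensor is the quantitative form of the argument made in the abstract.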

This paper presents an algorithmic approach for efficiency tests of deconvolution algorithms in astronomical image processing. Because of the noise present in astronomical data, there is no certainty that a mathematically exact result of stellar deconvolution exists, and iterative or other methods, such as aperture or PSF-fitting photometry, are commonly used. Iterative methods are important in particular for crowded fields (e.g., globular clusters). To test the efficiency of these iterative methods on various stellar fields, information about the real fluxes of the sources is essential. For this purpose, a simulator of artificial images with crowded stellar fields provides initial information on source fluxes for a robust statistical comparison of various deconvolution methods. The "GlencoeSim" simulator and the algorithms presented in this paper consider various settings of point-spread functions, noise types and spatial distributions, with the aim of producing as realistic an astronomical optical stellar image as possible.
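The core of such a simulator is simple: place sources with known fluxes, convolve each with a PSF, add a background, and apply photon noise. The sketch below follows that recipe with a Gaussian PSF and Poisson noise; all parameters are illustrative, and this is not the "GlencoeSim" code itself.

```python
import numpy as np

rng = np.random.default_rng(42)
size, n_stars, sigma = 128, 30, 1.5               # assumed field parameters
fluxes = rng.uniform(500, 5000, n_stars)          # ground-truth source fluxes
xs, ys = rng.uniform(10, size - 10, (2, n_stars)) # keep stars off the edges

yy, xx = np.mgrid[0:size, 0:size]
image = np.full((size, size), 20.0)               # sky background (counts/px)
for f, x0, y0 in zip(fluxes, xs, ys):
    psf = np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sigma**2))
    image += f * psf / (2 * np.pi * sigma**2)     # unit-normalized Gaussian PSF
noisy = rng.poisson(image).astype(float)          # photon (shot) noise
```

Because `fluxes` is known exactly, photometry recovered from `noisy` by any deconvolution method can be compared against ground truth, which is the statistical comparison the paper describes.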

A novel approach to image restoration is proposed in which prior knowledge of each detector's relative position in the detector array is no longer a necessity. We can identify each detector's relative location by extracting a certain area from one detector's image and scanning it across the other detectors' images. From this location, we can generate the point-spread function (PSF) for each detector and perform deconvolution for image restoration. Equipped with this method, a microscope with an arbitrarily designed detector array can easily be constructed without concern for the exact relative locations of the detectors. Simulated and experimental results show an overall improvement in resolution by a factor of 1.7 compared to conventional confocal fluorescence microscopy. With its significant enhancement in resolution and ease of application, this method should have potential for a wide range of applications in fluorescence microscopy based on parallel detection.
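The "scan a patch across the other image" step is, in signal-processing terms, a cross-correlation whose peak gives the relative offset. The toy sketch below demonstrates the FFT-based version on synthetic data with a known circular shift; the images and shift are fabricated for illustration, not detector data from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
ref = rng.random((64, 64))                       # "detector A" view
shift = (5, -3)                                  # true relative offset (dy, dx)
moved = np.roll(ref, shift, axis=(0, 1))         # "detector B" view

# FFT-based circular cross-correlation; the peak location is the shift
corr = np.fft.ifft2(np.fft.fft2(moved) * np.conj(np.fft.fft2(ref))).real
dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
dy = dy - 64 if dy > 32 else dy                  # unwrap to signed offsets
dx = dx - 64 if dx > 32 else dx
```

Once each detector's offset is known, a correspondingly shifted PSF can be assigned to it for the deconvolution step.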

Stochastic optical reconstruction microscopy (STORM) can achieve resolutions better than 20 nm when imaging single fluorescently labeled cells. However, when optical aberrations induced by larger biological samples degrade the point-spread function (PSF), the localization accuracy and the number of localizations are both reduced, destroying the resolution of STORM. Adaptive optics (AO) can be used to correct the wavefront, restoring the high resolution of STORM. A challenge for AO-STORM microscopy is the development of robust optimization algorithms which can efficiently correct the wavefront from stochastic raw STORM images. Here we present the implementation of a particle swarm optimization (PSO) approach with a Fourier metric for real-time correction of wavefront aberrations during STORM acquisition. We apply our approach to imaging boutons 100 μm deep inside the central nervous system (CNS) of Drosophila melanogaster larvae, achieving a resolution of 146 nm.

This paper proposes a method for de-blurring images captured during the dynamic deformation of materials. De-blurring is achieved with a dynamics-based approach, which is used to estimate the point-spread function (PSF) during the camera exposure window. The deconvolution process, which involves iterative matrix calculations over pixels, is then performed on the GPU to decrease the time cost. Compared to the Gauss method and the Lucy-Richardson method, the proposed method gives the best image restoration. It has been evaluated using a Hopkinson bar loading system: in comparison with the blurred image, the proposed method successfully restores the image. Image-processing applications also demonstrate that the de-blurring method can improve the accuracy and stability of digital image correlation measurement.

Virtual fluorescence emission difference microscopy (vFED) has been proposed recently to enhance the lateral resolution of confocal microscopy with a detector array, implemented by scanning a doughnut-shaped pattern. Theoretically, the resolution can be enhanced by around 1.3-fold compared with confocal microscopy. To further improve the resolving ability of vFED, a novel method is presented that utilizes fluorescence saturation for super-resolution imaging, which we call saturated virtual fluorescence emission difference microscopy (svFED). With a point detector array, matched solid and hollow point-spread functions (PSFs) can be obtained by photon reassignment, and the difference between them can be used to boost the transverse resolution. Results show that the diffraction barrier can be surpassed by at least 34% compared with vFED, and that the resolution is around 2-fold higher than in confocal microscopy.
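The emission-difference principle above can be seen numerically: subtracting a hollow (doughnut) PSF from a solid PSF narrows the effective PSF. The sketch below uses simple Gaussian-based models with a tunable subtraction weight; these are illustrative stand-ins, not the photon-reassigned PSFs of vFED/svFED.

```python
import numpy as np

r = np.arange(-15, 16)
xx, yy = np.meshgrid(r, r)
rho2 = xx**2 + yy**2
solid = np.exp(-rho2 / (2 * 2.0**2))            # solid PSF model
hollow = rho2 * np.exp(-rho2 / (2 * 2.5**2))    # doughnut PSF model
solid /= solid.sum()
hollow /= hollow.sum()

w = 1.0                                          # subtraction weight (tunable)
eff = np.clip(solid - w * hollow, 0.0, None)     # negative lobes discarded

def fwhm_px(profile):
    """Crude FWHM: count of pixels at or above half maximum."""
    return int(np.sum(profile >= profile.max() / 2))

center = len(r) // 2                             # central row through the PSF
```

Choosing `w` trades sidelobe suppression against signal loss; too large a weight carves negative lobes that must be clipped, which is why the weight is treated as a tunable parameter here.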

… movable, mechanical PET phantom to simulate patients' head movements while being scanned. This can be used for evaluating motion correction methods. A low-cost phantom controlled by a rotary stage motor was built and tested for axial rotations of 1-10 degrees with the multiple acquisition frame method. The phantom is able to perform stepwise and continuous axial rotations with submillimeter accuracy, and the movements are repeatable. The scans were acquired on the high resolution research tomograph dedicated brain scanner. The scans were reconstructed with the new 3-D ordered subset expectation maximization algorithm with modeling of the point-spread function (3DOSEM-PSF), and they were corrected for motion based on external tracking information using the Polaris Vicra real-time stereo motion-tracking system. The new automatic, movable phantom has a robust design and is a potential quality …

In this paper, we propose a method to simultaneously restore and segment piecewise-homogeneous images degraded by a known point-spread function (PSF) and additive noise. For this purpose, we propose a family of non-homogeneous Gauss-Markov fields with Potts region-label models for the images, used within a Bayesian estimation framework. The joint posterior law of all the unknowns (the unknown image, its segmentation (a hidden variable), and all the hyperparameters) is approximated by a separable probability law via the variational Bayes technique. This approximation makes it possible to obtain a practically implementable joint restoration and segmentation algorithm. We present some preliminary results and a comparison with an MCMC Gibbs-sampling-based algorithm. We note that the prior models proposed in this work are particularly appropriate for images of scenes or objects composed of a finite set of homogeneous materials, as is the case for many images obtained in nondestructive testing (NDT) applications.

We have investigated, by simulation and experimentally, the parameters that affect the image quality (contrast and spatial resolution) of the fast-neutron TRION detector. A scintillating fiber screen with 0.5×0.5 mm² square fibers, a few centimeters thick, provides spatial resolution superior to that of a slab scintillator of the same thickness. A detailed calculation of the neutron interaction processes that influence the point-spread function (PSF) in the scintillating screen has been performed using the GEANT 3.21 code. The calculations showed that neutron scattering within the screen accounts for a significant loss of image contrast. The factors that limit the spatial resolution of the image are the cross-sectional scintillating-fiber dimensions within the screen and the spatial response of the image intensifier. A deconvolution method has been applied to restore the contrast and spatial resolution of the fast-neutron image.

Traditionally, two-dimensional scans are designed to support an isotropic field-of-view (iFOV). When imaging elongated objects, significant savings in scan time can potentially be achieved by supporting an elliptical field-of-view (eFOV). This work presents an empirical closed-form solution to adapt the PROPELLER trajectory for an eFOV. The proposed solution is built on the geometry of the PROPELLER trajectory, permitting the scan prescription and data reconstruction to remain largely similar to standard PROPELLER. The achieved FOV is experimentally validated by the point-spread function (PSF) of a phantom scan. The potential savings in scan time and the signal-to-noise ratio (SNR) performance in comparison to iFOV scans are also described for both phantom and in-vivo images.

Sustainability of a river with respect to water quality is critical because it is closely related to environmental pollution, economic expenditure, and public health. This study poses a sustainability problem for a wastewater treatment system for river ecosystem conservation, which supports the healthy survival of the aquatic biota and of human beings. This study optimizes the design of a wastewater treatment system using the parameter-setting-free harmony search algorithm, which does not require the tedious value-setting process for algorithm parameters. The real-scale system has three different options for wastewater treatment, namely filtration, nitrification, and diverted irrigation (fertilization), as well as two existing treatment processes (settling and biological oxidation). The objective of this system design is to minimize life cycle costs, including the initial construction costs of those treatment options, while satisfying minimal dissolved-oxygen requirements in the river, a maximal nitrate-nitrogen concentration in groundwater, and a minimal nitrogen requirement for crop farming. Results show that the proposed technique can successfully find solutions without requiring a tedious parameter-setting process.

The PILATUS detector module was characterized in the PTB laboratory at BESSY II, comparing modules with 320 μm thick and newly developed 450 μm and 1000 μm thick silicon sensors. Measurements were carried out over a wide energy range, in vacuum from 1.75 keV to 8.8 keV and in air from 8 keV to 60 keV. The quantum efficiency (QE) was measured as a function of energy, and the spatial resolution was measured at several photon energies, both in terms of the modulation transfer function (MTF) from edge profile measurements and by directly measuring the point-spread function (PSF) of a single pixel in a raster scan with a pinhole beam. Independent of the sensor thickness, the measured MTF and PSF come close to those of an ideal pixel detector with the pixel size of the PILATUS detector (172 × 172 μm²). The measured QE follows the values predicted by calculation. Thicker sensors significantly enhance the QE of the PILATUS detectors for energies above 10 keV without impairing the spatial resolution and noise-free detection. In-vacuum operation of the PILATUS detector is possible at energies as low as 1.75 keV.

To assess the power profile and in vitro optical quality of scleral contact lenses of different powers as a function of the optical aperture. The mini-scleral and semi-scleral contact lenses (Procornea) were measured for five powers per design. The NIMO TR-1504 (Lambda-X) was used to assess the power profile and Zernike coefficients of each contact lens. Ten measurements per lens were taken at 3- and 6-mm apertures. Furthermore, the optical quality of each lens was described in terms of Zernike coefficients, the modulation transfer function, and the point-spread function (PSF). A convolution of each lens PSF with an eye-chart image was also computed. The optical power fluctuated less than 0.5 diopters (D) along the optical zone of each lens. However, the optical power obtained for some lenses did not match the corresponding nominal power, the maximum difference being 0.5 D. In optical quality, small differences were obtained among lenses within the same design. Although significant differences were obtained among lenses (Pscleral lens. Additionally, the optical quality of both lenses was shown to be independent of the lens power within the same aperture.
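The "convolution of each lens PSF with an eye-chart image" step can be sketched directly: the simulated through-the-lens image is the chart convolved with the PSF. The toy example below uses a striped pattern as the chart and a Gaussian as the PSF; both are synthetic stand-ins, not NIMO measurement data.

```python
import numpy as np

chart = np.zeros((64, 64))
chart[::8, :] = 1.0                          # crude striped "eye chart"

r = np.arange(-8, 9)
xx, yy = np.meshgrid(r, r)
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))
psf /= psf.sum()                             # unit-energy PSF

# 2-D convolution via FFT (circular boundary, adequate for a sketch)
kernel = np.zeros_like(chart)
kernel[:17, :17] = psf
kernel = np.roll(kernel, (-8, -8), axis=(0, 1))   # center kernel at origin
blurred_chart = np.fft.ifft2(np.fft.fft2(chart) * np.fft.fft2(kernel)).real
```

The blurred chart keeps the total light (the PSF has unit energy) but loses contrast, which is exactly what the convolved eye-chart images in the study visualize.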

We present a code for the optimal extraction of long-slit 2D spectra in crowded stellar fields. Its main advantage over, and difference from, existing spectrum extraction codes is the presence of a graphical user interface (GUI) and a convenient visualization system for data and extraction parameters. On the whole, the package is designed to study stars in crowded fields of nearby galaxies and star clusters in galaxies. Apart from extracting the spectra of several closely located or superimposed stars, it allows object spectra to be extracted with subtraction of superimposed nebulae of different shapes and different degrees of ionization. The package can also be used to study single stars in the case of a strong background. In the current version, the optimal extraction of 2D spectra with an aperture and a Gaussian function as the PSF (point-spread function) is provided. In the future, the package will be supplemented with the possibility of building a PSF based on a Moffat function. We present the details of the GUI, illustrate the main features of the package, and show extraction results for several interesting object spectra from different telescopes.

Subject of study. A vector model for the conversion of electromagnetic radiation in optical systems is considered, taking into account the influence of birefringence as well as partially coherent illumination. Model. The proposed model is based on the representation of the complex amplitude of the monochromatic field as a superposition of basic plane waves. Imaging in transmitted light with partially coherent illumination is computed by the source-integration method. Main results. The simulation results for the point-spread function demonstrate the degree to which birefringence influences image quality. In the presence of a wave aberration of about 0.098 of the wavelength, the energy loss in the center of the Airy disk was 8% at an average birefringence of 4 nm/cm, and reached 30% at 16 nm/cm. The calculation of the point-spread function for a real sample of fluorite is given. The central peak of the PSF without birefringence was 0.722; with birefringence taken into account, it was 0.701. Practical significance. The findings can be used in the development of photolithographic lenses, as well as in the manufacture of any other optical systems that require consideration of the polarization properties of the materials.

Purpose: Spatial resolution in digital breast tomosynthesis (DBT) is affected by inherent/binned detector resolution, oblique entry of x-rays, and focal spot size/motion; the limited angular range further limits spatial resolution in the depth direction. While DBT is being widely adopted clinically, imaging performance metrics and quality control protocols have not been standardized. AAPM Task Group 245 on Tomosynthesis Quality Control has been formed to address this deficiency. Methods: Methods of measuring spatial resolution are evaluated using two prototype quality control phantoms for DBT. Spatial resolution in the detector plane is measured in the projection and reconstruction domains using the edge-spread function (ESF), point-spread function (PSF) and modulation transfer function (MTF). Spatial resolution in the depth direction and effective slice thickness are measured in the reconstruction domain using the slice sensitivity profile (SSP) and artifact spread function (ASF). An oversampled PSF in the depth direction is measured using a 50 µm angulated tungsten wire, from which the MTF is computed. An object-dependent PSF is derived and compared with the ASF. Sensitivity of these measurements to phantom positioning, imaging conditions, and reconstruction algorithms is evaluated. Results are compared across systems of varying acquisition geometry (9-25 projections over 15-60°). Dependence of the measurements on feature size is investigated. Results: Measurements of spatial resolution using the PSF and LSF are shown to depend on feature size; depth-direction spatial resolution measurements are shown to similarly depend on feature size for the ASF, though deconvolution with an object function removes the feature-size dependence. A slanted wire may be used to measure oversampled PSFs, from which MTFs may be computed for both in-plane and depth-direction resolution. Conclusion: Spatial resolution measured using the PSF is object-independent with a sufficiently small object; MTF is object

The present study consists of a qualitative evaluation of user satisfaction in areas covered by the Family Health Program (Programa de Saúde da Família, PSF) located in five municipalities of Bahia, Brazil. The evaluation considered the following dimensions: cognitive, relational, organizational and professional, also viewed from the perspective of the family health teams. Owing to criticism in the literature of the methodological limitations of studies evaluating user satisfaction, in particular biases linked to social desirability or to the reduction of the subjective evaluation process to yes/no answers in closed questionnaires, we favored methodological strategies of an ethnographic character. Through the focus-group technique, users expressed their perception of the program and of the services offered by the teams, while revealing their needs and their expectations of having them met.

Purpose: To design and develop ultra-short echo time k-space sampling schemes, radial-cones, which enable high sampling efficiency while maintaining compatibility with parallel imaging and compressed sensing reconstructions. Theory and Methods: Radial-cones is a trajectory that samples 3D k-space using a single base cone distributed along radial dimensions through a cost-function-based optimization. Trajectories were generated for highly undersampled, short-readout sampling and compared to 3D radial sampling in point-spread function (PSF) analysis, digital and physical phantoms, and initial human volunteers. Parallel imaging reconstructions were evaluated with and without the use of compressed-sensing-based regularization. Results: Compared to 3D radial sampling, radial-cones reduced the peak value and energy of PSF aliasing. In both digital and physical phantoms, this improved sampling behavior corresponded to a reduction in the root-mean-square error, with a further reduction using compressed sensing. A slight increase in noise and a corresponding increase in apparent resolution were observed with radial-cones. In in-vivo feasibility testing, radial-cones reconstructions showed markedly fewer apparent artifacts. Ultimate gains in imaging performance were limited by off-resonance blurring. Conclusion: Radial-cones is an efficient non-Cartesian sampling scheme enabling short echo readout with a high level of compatibility with parallel imaging and compressed sensing. PMID: 27017991
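The PSF-aliasing comparison above has a simple computational core: the PSF of a sampling pattern is the inverse FFT of its k-space mask, and aliasing appears as sidelobe energy away from the main lobe. The toy example below contrasts regular and random 2x undersampling on a 2-D Cartesian grid; this is an illustrative stand-in, not the 3-D radial/radial-cones trajectories of the paper.

```python
import numpy as np

n = 128
ky, kx = np.mgrid[0:n, 0:n]
uniform = (kx % 2 == 0)                       # regular 2x undersampling
rng = np.random.default_rng(7)
random_mask = rng.random((n, n)) < 0.5        # random ~2x undersampling

def peak_sidelobe(mask):
    """Strongest PSF sidelobe relative to the main lobe at the origin."""
    psf = np.abs(np.fft.ifft2(mask.astype(float)))
    psf /= psf[0, 0]                          # normalize main lobe
    psf[0, 0] = 0.0                           # exclude the main lobe itself
    return psf.max()

coherent = peak_sidelobe(uniform)             # coherent replicate, ~main lobe
incoherent = peak_sidelobe(random_mask)       # noise-like, much weaker
```

Regular undersampling produces a coherent aliasing replicate as strong as the main lobe, while irregular sampling spreads the same energy into a low, noise-like floor, which is the property the radial-cones optimization exploits.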

In many undersampled imaging systems, spatial integration by the individual detector elements is the dominant component of the system point-spread function (PSF). Conventional focal plane arrays (FPAs) utilize square detector elements with a nearly 100% fill factor, where fill factor is defined as the fraction of the detector element area that is active in light detection. A large fill factor is generally considered desirable because more photons are collected for a given pitch, which leads to a higher signal-to-noise ratio (SNR). However, the large active area works against super-resolution (SR) image restoration by acting as an additional low-pass filter in the overall PSF when modeled on the SR sampling grid. A high fill factor also tends to increase blurring from pixel cross-talk. In this paper, we study the impact of FPA detector-element shape and fill factor on SR. A detailed modulation transfer function analysis is provided, along with a number of experimental results using both simulated data and real data acquired with a midwave infrared (MWIR) imaging system. We demonstrate the potential advantage of low fill factor detector elements when combined with SR image restoration. Our results suggest that low fill factor circular detector elements may be the best choice. New video results are presented using robust adaptive Wiener filter SR processing applied to data from a commercial MWIR imaging system with both high and low detector element fill factors.

Several groups have been developing X-ray microscopes for studies of biological and materials specimens at sub-optical resolution. The X1A Scanning Transmission X-ray Microscope at Brookhaven National Laboratory has achieved 55 nm Rayleigh resolution and is limited by the 45 nm finest zone width of the zone plate used to focus the X-rays. In principle, features as small as half the outermost zone width, or 23 nm, can be observed in the microscope, though with reduced contrast in the image. One approach to recovering the object from the image is to deconvolve the image with the point-spread function (PSF) of the optical system. Towards this end, the magnitude of the Fourier transform of the PSF, the modulation transfer function (MTF), has been experimentally determined and agrees reasonably well with calculations using the known parameters of the microscope. To minimize artifacts in the deconvolved images, large signal-to-noise ratios are required in the original image, and high-frequency filters can be used to reduce the noise at the expense of resolution. In this way the authors are able to recover the original contrast of high-resolution features in the images.
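The deconvolution step described above can be sketched with a Wiener filter, which divides by the optical transfer function while regularizing the high frequencies where noise dominates. This is a minimal illustration, not the authors' pipeline: the Gaussian PSF stands in for the measured zone-plate PSF, and the image size and regularization constant are hypothetical.

```python
import numpy as np

def psf_to_otf(psf, shape):
    """Zero-pad the PSF to `shape`, centre its peak at the origin, and
    return its FFT; the magnitude of this is the MTF discussed above."""
    padded = np.zeros(shape)
    padded[:psf.shape[0], :psf.shape[1]] = psf
    padded = np.roll(padded, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)),
                     axis=(0, 1))
    return np.fft.fft2(padded)

def wiener_deconvolve(image, psf, k=1e-6):
    """Wiener deconvolution H* G / (|H|^2 + k); the constant k plays the
    role of the high-frequency noise filter mentioned in the text."""
    H = psf_to_otf(psf, image.shape)
    G = np.fft.fft2(image)
    return np.real(np.fft.ifft2(np.conj(H) * G / (np.abs(H) ** 2 + k)))

# Blur a point source with a Gaussian stand-in for the zone-plate PSF,
# then recover it.
u = np.arange(-3, 4)
psf = np.exp(-(u[:, None] ** 2 + u[None, :] ** 2) / 2.0)
psf /= psf.sum()

obj = np.zeros((32, 32))
obj[16, 16] = 1.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(obj) * psf_to_otf(psf, obj.shape)))
restored = wiener_deconvolve(blurred, psf)
```

Raising `k` trades resolution for noise suppression, mirroring the trade-off the abstract describes for its high-frequency filters.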

The optical components of the Swift Gamma Ray Burst Explorer X-ray Telescope (XRT), consisting of the JET-X spare flight mirror and a charge-coupled device of the type used in the EPIC program, were used in a re-calibration study carried out at the Panter facility, which is part of the Max Planck Institute for Extraterrestrial Physics. The objective of this study was to check the focal length and the off-axis performance of the mirrors, and to show that the half-energy width (HEW) of the on-axis point-spread function (PSF) was of the order of 16 arcsec at 1.5 keV (Nucl. Instr. and Meth. A 488 (2002) 543; SPIE 4140 (2000) 64) and that a centroiding accuracy better than 1 arcsec could be achieved within the 4 arcmin sampling area designated by the Burst Alert Telescope (Nucl. Instr. and Meth. A 488 (2002) 543). The centroiding accuracy of the Swift XRT's optical components was tested as a function of distance from the focus and off-axis position of the PSF (Nucl. Instr. and Meth. A 488 (2002) 543). The presence ...

The 2020 Decadal technology survey is starting in 2018. Technology on the shelf at that time will help guide selection of future low-risk and low-cost missions. The Advanced Mirror Technology Development (AMTD) team has identified development priorities based on science goals and engineering requirements for Ultraviolet Optical near-Infrared (UVOIR) missions in order to contribute to the selection process. One key development identified was lightweight mirror fabrication and testing. A monolithic, stacked, deep-core mirror was fused and replicated twice to achieve the desired radius of curvature. It was subsequently successfully polished and tested. A recently awarded second phase of the AMTD project will develop larger mirrors to demonstrate the lateral scaling of the deep-core mirror technology. Another key development was rapid modeling for the mirror. One model focused on generating optical and structural model results in minutes instead of months. Many variables could be accounted for regarding the core, face plate, and back structure details. A portion of a spacecraft model was also developed. The spacecraft model incorporated direct integration to transform optical path difference to point-spread function (PSF) and from PSF to modulation transfer function. The second phase of the project will take the results of the rapid mirror modeler and integrate them into the rapid spacecraft modeler.

We present the data reduction pipeline for CHARIS, a high-contrast integral-field spectrograph for the Subaru Telescope. The pipeline constructs a ramp from the raw reads using the measured nonlinear pixel response and reconstructs the data cube using one of three extraction algorithms: aperture photometry, optimal extraction, or χ² fitting. We measure and apply both a detector flatfield and a lenslet flatfield and reconstruct the wavelength- and position-dependent lenslet point-spread function (PSF) from images taken with a tunable laser. We use these measured PSFs to implement a χ²-based extraction of the data cube, with typical residuals of ~5% due to imperfect models of the undersampled lenslet PSFs. The full two-dimensional residual of the χ² extraction allows us to model and remove correlated read noise, dramatically improving CHARIS's performance. The χ² extraction produces a data cube that has been deconvolved with the line-spread function and never performs any interpolations of either the data or the individual lenslet spectra. The extracted data cube also includes uncertainties for each spatial and spectral measurement. CHARIS's software is parallelized, written in Python and Cython, and freely available on GitHub with a separate documentation page. Astrometric and spectrophotometric calibrations of the data cubes and PSF subtraction will be treated in a forthcoming paper.

In this paper, we propose a point-spread function (PSF) reconstruction method and a joint maximum a posteriori (JMAP) estimation method for adaptive optics image restoration. Using the JMAP method as the basic principle, we establish the joint log-likelihood function of multi-frame adaptive optics (AO) images based on Gaussian noise models. First, combining the observing conditions and AO system characteristics, we develop a predicted PSF model for the wavefront phase effect; then we build up iterative solution formulas for the AO image based on the proposed algorithm and describe the implementation of the multi-frame AO image joint deconvolution method. We conduct a series of experiments on simulated and real degraded AO images to evaluate the proposed algorithm. Compared with the Wiener iterative blind deconvolution (Wiener-IBD) algorithm and the Richardson-Lucy IBD algorithm, our algorithm achieves better restoration, including higher peak signal-to-noise ratio (PSNR) and Laplacian sum (LS) values. The results have practical value for actual AO image restoration.
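For reference, the non-blind core of the Richardson-Lucy baseline mentioned above can be sketched in a few lines. This is an illustration of the comparison algorithm's update rule, not the authors' JMAP method; the Gaussian PSF and image size are hypothetical.

```python
import numpy as np

def _otf(psf, shape):
    """Pad the PSF to `shape` with its peak at the origin and FFT it."""
    padded = np.zeros(shape)
    padded[:psf.shape[0], :psf.shape[1]] = psf
    padded = np.roll(padded, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)),
                     axis=(0, 1))
    return np.fft.fft2(padded)

def richardson_lucy(image, psf, n_iter=50):
    """Richardson-Lucy deconvolution with a known PSF, using FFT-based
    circular convolution; `psf` must be normalized to unit sum."""
    H = _otf(psf, image.shape)
    conv = lambda f, T: np.real(np.fft.ifft2(np.fft.fft2(f) * T))
    estimate = np.full(image.shape, image.mean())
    for _ in range(n_iter):
        relative = image / np.maximum(conv(estimate, H), 1e-12)
        # Multiplicative update: correlate the ratio with the PSF.
        estimate = estimate * conv(relative, np.conj(H))
    return estimate

# Degrade a point source with a Gaussian PSF, then deconvolve it.
u = np.arange(-3, 4)
psf = np.exp(-(u[:, None] ** 2 + u[None, :] ** 2) / 2.0)
psf /= psf.sum()
truth = np.zeros((32, 32)); truth[16, 16] = 1.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(truth) * _otf(psf, truth.shape)))
restored = richardson_lucy(blurred, psf)
```

Note that the multiplicative update conserves total flux for a unit-sum PSF, one of the properties that makes it a common benchmark for AO restoration.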

We present a plan for calibration of the NeXT hard X-ray telescopes (HXT) at the synchrotron radiation facility SPring-8. In hard X-rays, it is difficult for a laboratory-based beamline using a conventional X-ray source to provide sufficient capabilities for pre-flight high-precision calibration. Therefore, we plan to characterize the NeXT HXT at the SPring-8 beamline BL20B2. SPring-8 is one of the world's third-generation synchrotron radiation facilities. Measurements at BL20B2 have great advantages over those done with conventional sources, such as an extremely high flux, a larger beam with less divergence, and a selectable, narrow bandwidth covering the hard X-ray region from 8 to over 100 keV. The 16 m long experimental hutch has sufficient capacity for characterization of the NeXT HXT (focal length 12 m). In the past, we have measured the point-spread function (PSF) and effective area of telescopes for balloon-borne hard X-ray imaging experiments (e.g., InFOCuS, SUMIT) at several energies from 20 to 60 keV. Furthermore, we have successfully established a tuning procedure to improve their image quality. We plan to measure the X-ray characteristics (PSF, effective area, stray light, and so on) of the NeXT HXT to build up the HXT response function.

A simple model has been developed and implemented in Matlab code to predict the over-exposed pixel area of cameras caused by laser dazzling. Inputs of this model are the laser irradiance on the front optics of the camera, the point-spread function (PSF) of the optics used, the integration time of the camera, and camera sensor specifications such as pixel size, quantum efficiency, and full-well capacity. Effects of the read-out circuit of the camera are not incorporated. The model was evaluated with laser dazzle experiments on CCD cameras using a 532 nm CW laser dazzler and shows good agreement. For relatively low laser irradiance, the model predicts the over-exposed laser spot area quite accurately and reproduces the cube-root dependency of spot diameter on laser irradiance caused by the PSF, as demonstrated before for IR cameras. For higher laser power levels, the laser-induced spot diameter increases more rapidly than predicted, which can probably be attributed to scatter effects in the camera. First attempts to model scatter contributions, using a simple scatter power function f(θ), show good resemblance to the experiments. This model provides a tool for assessing the performance of observation sensor systems subjected to laser countermeasures.
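The cube-root dependency mentioned above follows from PSF wings that fall off roughly as r⁻³: the area above the saturation threshold then grows as the 2/3 power of irradiance, so the equivalent spot diameter grows as its cube root. A hypothetical numerical check (the wing profile, threshold, and power values are illustrative, not taken from the paper's model):

```python
import numpy as np

# Hypothetical PSF wing model: irradiance falling off as r^-3 in the
# far wings reproduces the cube-root growth of the dazzle spot.
npix = 501
yy, xx = np.mgrid[:npix, :npix] - npix // 2
r = np.hypot(yy, xx)
wing = 1.0 / (1.0 + r**3)              # normalized wing profile

full_well = 1e-4                       # saturation threshold (arbitrary)
powers = [1.0, 8.0, 64.0]              # laser power stepped by factors of 8
# Equivalent diameter of the saturated region for each power level.
diameters = [2 * np.sqrt((P * wing > full_well).sum() / np.pi)
             for P in powers]
```

Each 8-fold power step should roughly double the spot diameter, which is the cube-root scaling the abstract reports for the low-irradiance regime.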

Purpose: One of the benefits of photon counting (PC) detectors over energy integrating (EI) detectors is the absence of many additive noise sources, such as electronic noise and secondary quantum noise. The purpose of this work is to demonstrate that thresholding voltage gains to detect individual x rays actually generates an unexpected source of white noise in photon counters. Methods: To distinguish the two detector types, their point-spread function (PSF) is interpreted differently. The PSF of the energy integrating detector is treated as a weighting function for counting x rays, while the PSF of the photon counting detector is interpreted as a probability. Although this model ignores some subtleties of real imaging systems, such as scatter and the energy-dependent amplification of secondary quanta in indirect-converting detectors, it is useful for demonstrating fundamental differences between the two detector types. From first principles, the optical transfer function (OTF) is calculated as the continuous Fourier transform of the PSF, the noise power spectrum (NPS) is determined by the discrete-space Fourier transform (DSFT) of the autocovariance of signal intensity, and the detective quantum efficiency (DQE) is found from combined knowledge of the OTF and NPS. To illustrate the calculation of the transfer functions, the PSF is modeled as the convolution of a Gaussian with the product of rect functions. The Gaussian reflects the blurring of the x-ray converter, while the rect functions model the sampling of the detector. Results: The transfer functions are first calculated assuming outside noise sources such as electronic noise and secondary quantum noise are negligible. It is demonstrated that while the OTF is the same for two detector types possessing an equivalent PSF, a frequency-independent (i.e., "white") difference in their NPS exists, such that NPS_PC ≥ NPS_EI and hence DQE_PC ≤ DQE_EI. The necessary and sufficient condition for equality is that the PSF
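The PSF model in the Methods (a Gaussian convolved with rect functions) can be checked numerically in one dimension: convolution in space is multiplication in frequency, so the OTF should factor into a Gaussian times a sinc. The widths below are illustrative values, not parameters from the paper.

```python
import numpy as np

# 1-D sketch of the abstract's PSF model: a Gaussian (converter blur)
# convolved with a rect (the detector aperture).
sigma, a = 0.5, 1.0                    # blur width, aperture width (pixels)
x = np.linspace(-8, 8, 4097)
dx = x[1] - x[0]
gauss = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
rect = (np.abs(x) <= a / 2) / a
psf = np.convolve(gauss, rect, mode="same") * dx     # Gaussian ⊗ rect

# Numerical OTF at a test frequency vs. the analytic Gaussian × sinc.
f = 0.25                                              # cycles per pixel
otf_num = np.sum(psf * np.exp(-2j * np.pi * f * x)) * dx
otf_ana = np.exp(-2 * (np.pi * f * sigma) ** 2) * np.sinc(f * a)
```

The agreement of `otf_num` with `otf_ana` confirms the factorization; the white NPS difference between the detector types is a separate statistical result not captured by this deterministic check.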

We describe a positron tomograph using a single ring of 600 close-packed, 3 mm wide bismuth germanate (BGO) crystals coupled to 14 mm phototubes. The phototube preamplifier circuit derives a timing pulse from the first photoelectron and sends it to the address and coincidence circuits only if the integrated pulse height is within a pre-set window. The timing delays and pulse-height windows for all 600 detectors and the coincidence timing windows are computer adjustable. An orbiting positron source is used for transmission measurements, and a look-up table is used to reject scattered and random coincidences that do not pass through the source. Data can be acquired using a stationary mode for 1.57 mm lateral sampling or a two-position clam sampling mode for 0.79 mm lateral sampling. High maximum data rates are provided by 45 parallel coincidence circuits and 4 parallel histogram memory units. With two-position sampling and 1.57 mm bins, the reconstructed point-spread function (PSF) of a 0.35 mm diameter ²²Na wire source at the center of the tomograph is circular with 2.9 mm full width at half maximum (FWHM), and the PSF at a distance of 8 cm from the center is elliptical with a radial FWHM of 4.0 mm and a tangential FWHM of 3.0 mm.

This research images trapped atoms in three dimensions using light field imaging. Such a system is of interest for the development of atom interferometer accelerometers in dynamic settings where strictly defined focal planes may be impractical. In this research, a light field microscope was constructed using a Lytro Development Kit micro-lens array and sensor, and was used to image fluorescing rubidium atoms in a magneto-optical trap. The three-dimensional (3D) volume of the atoms is reconstructed using a modeled point-spread function (PSF), taking into consideration that the low magnification (1.25) of the system changed the typical assumptions used in the optics model for the PSF. The 3D reconstruction is analyzed with respect to a standard off-axis fluorescence image. Optical-axis separation between two atom clouds is measured to 100 μm accuracy in a 3 mm deep volume, with a 16 μm in-focus standard resolution and a 3.9 mm by 3.9 mm field of view. Optical-axis spreading is observed in the reconstruction and discussed. The 3D information can be used to determine properties of the atom cloud with a single camera and a single image, and can be applied anywhere 3D information is needed but optical access may be limited.

The Deep Space Climate Observatory (DSCOVR) enables analysis of the daytime Earth radiation budget via the onboard Earth Polychromatic Imaging Camera (EPIC) and the National Institute of Standards and Technology Advanced Radiometer (NISTAR). Radiance observations and cloud property retrievals from low Earth orbit and geostationary satellite imagers have to be co-located with EPIC pixels to provide scene identification in order to select the anisotropic directional models needed to calculate shortwave and longwave fluxes. A new algorithm is proposed for optimal merging of selected radiances and cloud properties derived from multiple satellite imagers to obtain seamless global hourly composites at 5 km resolution. An aggregated rating is employed to incorporate several factors and to select the best observation at the time nearest to the EPIC measurement. Spatial accuracy is improved using inverse mapping with gradient search during reprojection and bicubic interpolation for pixel resampling. The composite data are subsequently remapped into the EPIC-view domain by convolving composite pixels with the EPIC point-spread function (PSF) defined with half-pixel accuracy. PSF-weighted average radiances and cloud properties are computed separately for each cloud phase. The algorithm has demonstrated contiguous global coverage for any requested time of day, with a temporal lag of under 2 hours over more than 95 percent of the globe.

Considering the high cost of dedicated small-animal positron emission tomography/computed tomography (PET/CT), an acceptable alternative in many situations might be clinical PET/CT. However, spatial resolution and image quality are of concern. The utility of clinical PET/CT for small-animal research, and the image quality improvements obtainable from super-resolution (spatial subsampling), were investigated. National Electrical Manufacturers Association (NEMA) NU 4 phantom and mouse data were acquired with a clinical PET/CT scanner, as both conventional static and stepped scans. Static scans were reconstructed with and without point-spread function (PSF) modeling. Stepped images were postprocessed with iterative deconvolution to produce super-resolution images. Image quality was markedly improved using the super-resolution technique, avoiding certain artifacts produced by PSF modeling. The 2 mm rod of the NU 4 phantom was visualized with high contrast, and the major structures of the mouse were well resolved. Although not a perfect substitute for a state-of-the-art small-animal PET/CT scanner, a clinical PET/CT scanner with super-resolution produces acceptable small-animal image quality for many preclinical research studies.

Chromatin is a multiscale dynamic architecture that acts as a template for many biochemical processes, such as transcription and DNA replication. Recent developments such as Hi-C technology enable identification of chromatin interactions across an entire genome. However, a single-cell dynamic view of chromatin organization is far from understood. We discuss a new live-cell imaging technique to probe the dynamics of the nucleus at the single-cell level using single-walled carbon nanotubes (SWNTs). SWNTs are non-perturbing rigid rods (diameter of 1 nm and length of roughly 100 nm) that fluoresce in the near-infrared region. Due to their high aspect ratio, they can diffuse in tight spaces and report on the architecture and dynamics of the nucleoplasm. We develop 3D imaging and tracking of SWNTs in the volume of the nucleus using double-helix point-spread function (DH-PSF) microscopy and discuss the capabilities of the DH-PSF for inferring the 3D orientation of nanotubes based on vectorial diffraction theory.

The relative motion between a remote sensing satellite sensor and objects on the ground is one of the most common causes of remote sensing image degradation. It seriously weakens image interpretation and information extraction. In practice, the point-spread function (PSF) should be estimated first for image restoration, and identifying the motion blur direction and length accurately is crucial for estimating the PSF and restoring the image with precision. In general, the regular light-and-dark stripes in the spectrum can be employed to obtain these parameters using the Radon transform. However, the serious noise present in actual remote sensing images often makes the stripes indistinct, so the parameters become difficult to calculate and the estimation error relatively large. In this paper, an improved motion blur parameter identification method for noisy remote sensing images is proposed to solve this problem. The spectral characteristics of noisy remote sensing images are analyzed first. An interactive image segmentation method based on graph theory, GrabCut, is adopted to effectively extract the edge of the bright central region of the spectrum. The motion blur direction is estimated by applying the Radon transform to the segmentation result. To reduce random error, a method based on whole-column statistics is used when calculating the blur length. Finally, the Lucy-Richardson algorithm is applied to restore remote sensing images of the Moon after estimating the blur parameters. The experimental results verify the effectiveness and robustness of the algorithm.
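The spectral stripes exploited above can be illustrated in a simplified setting: for a horizontal blur of length L, the row-averaged magnitude spectrum has near-zeros at frequency multiples of 1/L, from which the length is read off. This sketch covers only the length estimate along a known blur direction (the direction search via Radon transform is omitted), and the image size and blur length are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 128, 8                       # image width, true blur length (pixels)

img = rng.random((N, N))
kernel = np.zeros(N)
kernel[:L] = 1.0 / L                # horizontal (circular) motion blur
blurred = np.real(np.fft.ifft(np.fft.fft(img, axis=1) * np.fft.fft(kernel),
                              axis=1))

# Averaging the row spectra plays the role of the whole-column statistics
# in the text: the blur imprints dark stripes at multiples of 1/L.
spec = np.abs(np.fft.fft(blurred, axis=1)).mean(axis=0)
first_zero = int(np.argmax(spec[1:N // 2] < 1e-6 * spec[0])) + 1
est_length = N / first_zero         # estimated blur length in pixels
```

With noise added to `blurred`, the zeros fill in and the threshold test becomes unreliable, which is exactly the failure mode the proposed GrabCut-based segmentation is designed to handle.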

The point-spread function (PSF) is an important measure of spatial resolution in CCDs for point-like objects, since it affects image quality and spectroscopic resolution. We present new data and theoretical developments for lateral charge diffusion in thick, fully-depleted charge-coupled devices (CCDs) developed at Lawrence Berkeley National Laboratory (LBNL). Because they can be over-depleted, the LBNL devices have no field-free region and diffusion is controlled through the application of an external bias voltage. We give results for a 3512 x 3512 format, 10.5 μm pixel back-illuminated p-channel CCD developed for the SuperNova/Acceleration Probe (SNAP), a proposed satellite-based experiment designed to study dark energy. The PSF was measured at substrate bias voltages between 3 V and 115 V. At a bias voltage of 115 V, we measure an rms diffusion of 3.7 ± 0.2 μm. Lateral charge diffusion in LBNL CCDs will meet the SNAP requirements.

The point-spread function (PSF) is an important measure of spatial resolution in CCDs for point-like objects, since it can affect use in imaging and spectroscopic applications. We present new data and theoretical developments in the study of lateral charge diffusion in thick, fully-depleted charge-coupled devices (CCDs) developed at Lawrence Berkeley National Laboratory (LBNL). Because they are fully depleted, the LBNL devices have no field-free region, and diffusion can be controlled through the application of an external bias voltage. We give results for a 3512 x 3512 format, 10.5 μm pixel back-illuminated p-channel CCD developed for the SuperNova/Acceleration Probe (SNAP), a proposed satellite-based experiment designed to study dark energy. The PSF was measured at substrate bias voltages between 3 V and 115 V. At a bias voltage of 115 V, we measure an rms diffusion of 3.7 ± 0.2 μm. Lateral charge diffusion in LBNL CCDs is thus expected to meet the SNAP requirements.

A likelihood-based method for measuring weak gravitational lensing shear in deep galaxy surveys is described and applied to the Canada-France-Hawaii Telescope (CFHT) Lensing Survey (CFHTLenS). CFHTLenS comprises 154 deg² of multi-colour optical data from the CFHT Legacy Survey, with lensing measurements made in the i' band, down to the survey depth, for galaxies with signal-to-noise ratio ν_SN ≳ 10. The method is based on the lensfit algorithm described in earlier papers, but here we describe a full analysis pipeline that takes into account the properties of real surveys. The method creates pixel-based models of the varying point-spread function (PSF) in individual image exposures. It fits PSF-convolved two-component (disc plus bulge) models to measure the ellipticity of each galaxy, with Bayesian marginalization over the model nuisance parameters of galaxy position, size, brightness, and bulge fraction. The method allows optimal joint measurement of multiple, dithered image exposures, taking into account imaging distortion and the alignment of the multiple measurements. We discuss the effects of noise bias on the likelihood distribution of galaxy ellipticity. Two sets of image simulations that mirror the observed properties of CFHTLenS have been created to establish the method's accuracy and to derive an empirical correction for the effects of noise bias.

In this paper, a comprehensive set of techniques for quality control and authentication of packaged integrated circuits (ICs) using terahertz (THz) time-domain spectroscopy (TDS) is developed. Material characterization reveals the presence of unexpected materials in counterfeit components. Blacktopping layers are detected using THz time-of-flight tomography, and the thickness of hidden layers is measured. Sanded and contaminated components are detected by THz reflection-mode imaging. Differences between the internal structures of counterfeit and authentic components are revealed through THz transmission imaging. To enable accurate measurement of features by THz transmission imaging, a novel resolution enhancement technique (RET) has been developed. This RET is based on deconvolution of the THz image with the THz point-spread function (PSF). The THz PSF is mathematically modeled by incorporating the spectrum of the THz imaging system, the axis of propagation of the beam, and the intensity extinction coefficient of the object into a Gaussian beam distribution. As a result of implementing this RET, the accuracy of measurements on THz images has been improved from 2.4 mm to 0.1 mm, and bond wires as small as 550 μm inside the packaging of the ICs are imaged.

We present the results of a study to optimize the principal component analysis (PCA) algorithm for planet detection, a new algorithm complementing angular differential imaging and the locally optimized combination of images (LOCI) for increasing the contrast achievable next to a bright star. The stellar point-spread function (PSF) is constructed by removing linear combinations of principal components, allowing the flux from an extrasolar planet to shine through. The number of principal components used determines how well the stellar PSF is globally modeled. Using more principal components may decrease the number of speckles in the final image, but also increases the background noise. We apply PCA to Fomalhaut Very Large Telescope NaCo images acquired at 4.05 μm with an apodized phase plate. We do not detect any companions, with a model-dependent upper mass limit of 13-18 M_Jup from 4-10 AU. PCA achieves greater sensitivity than the LOCI algorithm for the Fomalhaut coronagraphic data by up to 1 mag. We make several adaptations to the PCA code and determine which of these prove the most effective at maximizing the signal-to-noise ratio from a planet very close to its parent star. We demonstrate that optimizing the number of principal components used in PCA proves most effective for pulling out a planet signal.

This contribution is another opportunity to acknowledge the influence of Roger Maynard on our research work, when he pushed one of us (ACB) to explore the field of waves propagating in complex media rather than limiting ourselves to the wavelength scale of thermal waves or near-field phenomena. Optical tomography is used for in-depth imaging of scattering media such as biological tissues. Optical coherence tomography (OCT) plays an important role in imaging biological samples. Coupling OCT with adaptive optics (AO) to correct eye aberrations has led to cellular imaging of the retina. Using our approach, called full-field OCT (FFOCT), we show that, with spatially incoherent illumination, the width of the point-spread function (PSF) that governs the resolution is not affected by aberrations, which induce only a reduction of the signal level. We describe our approach starting with the PSF experimental data, followed by a simple theoretical analysis and numerical calculations. Finally, full images obtained through or inside scattering and aberrating media are shown.

A new procedure, designed to remove foreground stars from galaxy profiles, is presented here. Although several programs exist for stellar and faint-object photometry, none of them treat star removal from the images very carefully. I present my attempt to develop such a system, and briefly compare the performance of my software to one of the well-known stellar photometry packages, DAOPhot (Stetson 1987). The major steps in my procedure are: (1) automatic construction of an empirical 2D point-spread function from well-separated stars situated off the galaxy; (2) automatic identification of peaks that are likely to be foreground stars, scaling the PSF and removing these stars, and patching residuals (in the automatically determined smallest possible area where the residuals are truly significant); and (3) cosmetic fixing of any remaining degradations in the image. The algorithm and software presented here are significantly better for automatic removal of foreground stars from images of galaxies than DAOPhot or similar packages, since: (a) the most suitable stars are selected automatically from the image for the PSF fit; (b) after star removal an intelligent, automatic procedure removes any possible residuals; and (c) an unlimited number of images can be cleaned in one run without any user interaction whatsoever.
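Step (2) above, scaling the PSF to a detected peak and subtracting it, can be sketched as follows. The least-squares amplitude fit with a border-median background estimate is an illustrative implementation choice, not necessarily the author's, and the toy galaxy, PSF, and star position are hypothetical.

```python
import numpy as np

def remove_star(image, psf, peak_yx):
    """Fit the PSF amplitude at a detected peak (least squares, after
    subtracting a local background taken from the patch border) and
    subtract the scaled PSF, as in step (2) of the procedure."""
    y, x = peak_yx
    h, w = psf.shape
    y0, x0 = y - h // 2, x - w // 2
    patch = image[y0:y0 + h, x0:x0 + w]
    border = np.concatenate([patch[0], patch[-1], patch[:, 0], patch[:, -1]])
    bg = np.median(border)                      # local sky/galaxy level
    amp = np.sum((patch - bg) * psf) / np.sum(psf * psf)
    cleaned = image.copy()
    cleaned[y0:y0 + h, x0:x0 + w] = patch - amp * psf
    return cleaned

# Toy galaxy profile with one foreground star stamped on top.
yy, xx = np.mgrid[:64, :64]
galaxy = np.exp(-np.hypot(yy - 32.0, xx - 32.0) / 12.0)
u = np.arange(-5, 6)
psf = np.exp(-(u[:, None]**2 + u[None, :]**2) / (2 * 1.5**2))  # PSF stand-in
img = galaxy.copy()
img[35:46, 15:26] += 3.0 * psf                  # star centred at (40, 20)

cleaned = remove_star(img, psf, (40, 20))
```

On smooth galaxy backgrounds this simple fit already leaves only small residuals; the paper's residual-patching step (its step 2b) handles what remains.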

We use data from the IAC Stripe 82 Legacy Project to study the surface photometry of 22 nearby, face-on to moderately inclined spiral galaxies. The reprocessed and combined Stripe 82 g', r', and i' images allow us to probe the galaxies down to 29-30 r' mag arcsec⁻² and thus reach into their very faint outskirts. Truncations are found in three galaxies. An additional 15 galaxies are found to have an apparent extended stellar halo. Simulations show that the scattering of light from the inner galaxy by the point-spread function (PSF) can produce faint structures resembling haloes, but this effect is insufficient to fully explain the observed haloes. The presence of these haloes and of truncations is mutually exclusive, and we argue that the presence of a stellar halo and/or light scattered by the PSF can hide truncations. Furthermore, we find that the onset radius of the stellar halo and of the truncations scales tightly with galaxy size. Interestingly, the fraction of light in the halo does not correlate with dynamical mass. Nineteen galaxies are found to have breaks in their profiles, the radius of which also correlates with galaxy size.

We present the first results from the polarimetry mode of the Gemini Planet Imager (GPI), which uses a new integral field polarimetry architecture to provide high-contrast linear polarimetry with minimal systematic biases between the orthogonal polarizations. We describe the design, data reduction methods, and performance of polarimetry with GPI. Point-spread function (PSF) subtraction via differential polarimetry suppresses unpolarized starlight by a factor of over 100, and provides sensitivity to circumstellar dust reaching the photon noise limit for these observations. In the case of the circumstellar disk around HR 4796A, GPI's advanced adaptive optics system reveals the disk clearly even prior to PSF subtraction. In polarized light, the disk is seen all the way in to its semi-minor axis for the first time. The disk exhibits surprisingly strong asymmetry in polarized intensity, with the west side ≳ 9 times brighter than the east side, despite the fact that the east side is slightly brighter in total intensity. Based on a synthesis of the total and polarized intensities, we now believe that the west side is closer to us, contrary to most prior interpretations. Forward scattering by relatively large silicate dust particles leads to the strong polarized intensity on the west side, and the ring must be slightly optically thick in order to explain the lower brightness in total intensity there. These findings suggest that the ring is geometrically narrow and dynamically cold, perhaps shepherded by larger bodies in the same manner as Saturn's F ring.

With the advent of flip-chips, internal debug tools need to image the active regions of devices through their silicon substrates. Infrared (IR) optics can 'see' through silicon, but accurate navigation to a particular node is difficult because IR resolution is often lower than the feature size to be probed. To meet this accuracy requirement, we have developed an automated sub-resolution alignment of a device's computer-aided design (CAD) to its through-silicon IR image. Automated image alignment is not straightforward because CAD and IR images differ significantly in magnification, rotation, intensity, and resolution, causing standard alignment algorithms to fail. The light diffraction of the optical system blurs and distorts the shape and size of features, causing both edge-based and intensity-based cross-correlation techniques to fail. The alignment methodology we present consists of pre-processing (equalization) of the two images, followed by sub-resolution offset computation along with x-y confidence factors. We apply a modeled point-spread function (PSF) of the optical system to the CAD image to increase its resemblance to the optical image. The application of the PSF is important in resolution-equalization, and becomes critical if 'ghosting' is present in the optical image. Using our alignment algorithm, which combines image equalization, over-sampling, and cross-correlation, we demonstrate the ability to achieve 0.1 micron placement accuracy with a 1 micron resolution optical system.
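
A minimal sketch of the resolution-equalization idea on synthetic data (the real tool's PSF model, ghosting handling, and sub-pixel oversampling are not reproduced here): blur the CAD layout with the modelled PSF so it resembles the optical image, then read the offset off the peak of an FFT cross-correlation.

```python
import numpy as np

def gaussian_kernel(n, sigma):
    """Centred, unit-sum Gaussian kernel on an n x n grid."""
    y, x = np.indices((n, n))
    g = np.exp(-0.5 * ((x - n // 2) ** 2 + (y - n // 2) ** 2) / sigma ** 2)
    return g / g.sum()

def fft_convolve(img, kernel):
    """Circular convolution; the centred kernel is re-centred to pixel 0,0."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) *
                                np.fft.fft2(np.fft.ifftshift(kernel))))

def align_offset(ref, img):
    """Integer-pixel shift to apply to `img` (via np.roll) to match `ref`."""
    cc = np.real(np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img))))
    iy, ix = np.unravel_index(np.argmax(cc), cc.shape)
    wrap = lambda i, n: i - n if i > n // 2 else i   # map into [-n/2, n/2)
    return wrap(iy, cc.shape[0]), wrap(ix, cc.shape[1])

# Synthetic "CAD" layout: a few rectangles on a 128 x 128 grid.
cad = np.zeros((128, 128))
cad[40:60, 30:90] = 1.0
cad[80:100, 50:70] = 1.0

# Pseudo-optical image: the CAD blurred by the modelled PSF, then shifted.
psf = gaussian_kernel(128, sigma=4.0)
optical = np.roll(fft_convolve(cad, psf), (5, -7), axis=(0, 1))

# Equalize resolution: blur the CAD with the same PSF before correlating.
dy, dx = align_offset(fft_convolve(cad, psf), optical)
print(f"recovered shift: ({dy}, {dx})")
```

Blurring the CAD first is what makes the correlation peak sharp and unambiguous; correlating the raw binary CAD against the diffraction-limited image directly is exactly the failure mode the abstract describes.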

We propose an incoherent encoding system for image encryption based on a polarized encoding method combined with incoherent imaging. Incoherent imaging is the core component of this proposal, in which the incoherent point-spread function (PSF) of the imaging system serves as the main key to encode the input intensity distribution via a convolution operation. An array of retarders and polarizers is placed on the input plane of the imaging structure to encrypt the polarization state of light based on Mueller polarization calculus. The proposal makes full use of the randomness of the polarization parameters and the incoherent PSF, so that a multidimensional key space is generated to deal with illegal attacks. Mueller polarization calculus and incoherent illumination of the imaging structure ensure that only intensity information is manipulated. Another key advantage is that complicated processing and recording related to a complex-valued signal are avoided. The encoded information is just an intensity distribution, which is advantageous for data storage and transmission because the information expansion accompanying conventional encryption methods is also avoided. The decryption procedure can be performed digitally or using optoelectronic devices. Numerical simulation tests demonstrate the validity of the proposed scheme.
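
The convolution encode/decode step can be sketched digitally in a few lines (a numerical stand-in for the optical system; the function names and the Wiener-style regularization are my own choices, and the polarization layer is omitted):

```python
import numpy as np

rng = np.random.default_rng(7)

def encrypt(intensity, psf):
    """Incoherent imaging = convolution of the input intensity with the PSF."""
    return np.real(np.fft.ifft2(np.fft.fft2(intensity) * np.fft.fft2(psf)))

def decrypt(cipher, psf, eps=1e-6):
    """Regularized inverse (Wiener-style) filter using the PSF as the key."""
    H = np.fft.fft2(psf)
    return np.real(np.fft.ifft2(np.fft.fft2(cipher) * np.conj(H) /
                                (np.abs(H) ** 2 + eps)))

n = 64
secret = np.zeros((n, n))
secret[20:44, 20:44] = 1.0                        # toy intensity image
psf = rng.random((n, n)); psf /= psf.sum()        # random non-negative key

cipher = encrypt(secret, psf)
recovered = decrypt(cipher, psf)

wrong_key = rng.random((n, n)); wrong_key /= wrong_key.sum()
garbage = decrypt(cipher, wrong_key)
```

Note that the cipher, the key, and the decrypted result are all plain intensity (real, non-negative-key) arrays, which is the storage advantage the abstract emphasizes; decryption with the wrong PSF yields an uncorrelated intensity pattern.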

SPECT studies with 123I-ioflupane facilitate the diagnosis of Parkinson's disease (PD). The effect of image degradations on quantification has been extensively evaluated in human studies, but their impact on studies of experimental PD models is still unclear. The aim of this work was to assess the effect of compensating for the degrading phenomena on the quantification of small animal SPECT studies using 123I-ioflupane. This assessment enabled us to evaluate the feasibility of quantitatively detecting small pathological changes using different reconstruction methods and levels of compensation for the image degrading phenomena. Monte Carlo simulated studies of a rat phantom were reconstructed and quantified. Compensations for point-spread function (PSF), scattering, attenuation and partial volume effect were progressively included in the quantification protocol. A linear relationship was found between calculated and simulated specific uptake ratio (SUR) in all cases. In order to significantly distinguish disease stages, noise reduction during the reconstruction process was the most relevant factor, followed by PSF compensation. The smallest detectable SUR interval was determined by biological variability rather than by image degradations or coregistration errors. The quantification methods that gave the best results allowed us to distinguish PD stages with SUR values as close as 0.5 using groups of six rats to represent each stage.
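
The specific uptake ratio quantified here follows the usual definition for ioflupane studies; a minimal helper (variable names and the toy numbers are illustrative, not values from the paper):

```python
import numpy as np

def specific_uptake_ratio(striatal_voxels, reference_voxels):
    """SUR = (C_striatum - C_reference) / C_reference.

    Inputs are voxel values from the striatal and reference (e.g. cerebellar)
    regions of interest, assumed already compensated for PSF, scatter,
    attenuation and partial volume as described in the protocol."""
    c_str = float(np.mean(striatal_voxels))
    c_ref = float(np.mean(reference_voxels))
    return (c_str - c_ref) / c_ref

# Toy numbers: a healthy-like rat vs. a lesioned-like rat, 0.5 SUR apart --
# the smallest interval the abstract reports as distinguishable.
sur_healthy = specific_uptake_ratio([3.1, 2.9, 3.0], [1.0, 1.0, 1.0])
sur_lesion = specific_uptake_ratio([2.6, 2.4, 2.5], [1.0, 1.0, 1.0])
print(round(sur_healthy - sur_lesion, 3))  # ~0.5
```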

The development and demonstration of a "polished panel" optical receiver concept on the 34 meter research antenna of the Deep Space Network (DSN) has been the subject of recent papers. This concept would enable simultaneous reception of optical and microwave signals by retaining the original shape of the main reflector for microwave reception, but with the aluminum panels polished to high reflectivity to enable focusing of optical signal energy as well. A test setup has been installed on the DSN's 34 meter research antenna at Deep Space Station 13 (DSS-13) of NASA's Goldstone Communications Complex in California, and preliminary experimental results have been obtained. This paper describes the results of our latest efforts to improve the point-spread function (PSF) generated by a custom polished panel, in an attempt to reduce the dimensions of the PSF, thus enabling more precise tracking and improved detection performance. The design of the new mechanical support structure and its operation are described, and the results quantified in terms of improvements in collected signal energy and optical communications performance, based on data obtained while tracking the planet Jupiter with the 34 meter research antenna at DSS-13.

We present our image processing pipeline that corrects the systematics introduced by the point-spread function (PSF). Using this pipeline, we processed Sloan Digital Sky Survey (SDSS) DR7 imaging data in the r band and generated a galaxy catalog containing the shape information. Based on our shape measurements of the galaxy images from SDSS DR7, we extract the galaxy–galaxy (GG) lensing signals around foreground spectroscopic galaxies binned in different luminosities and stellar masses. We estimated the systematics, e.g., selection bias, PSF reconstruction bias, PSF dilution bias, shear responsivity bias, and noise rectification bias, which in total amount to between −9.1% and 20.8% at the 2σ level. The overall GG lensing signals we measured are in good agreement with Mandelbaum et al. The reduced χ² values between the two measurements in different luminosity bins range from 0.43 to 0.83. Larger reduced χ² values, from 0.60 to 1.87, are seen for the different stellar mass bins, which is mainly caused by the different stellar mass estimators. The higher signal-to-noise ratio of our results is due to the larger survey area compared with SDSS DR4, confirming that more luminous/massive galaxies bear stronger GG lensing signals. We divide the foreground galaxies into red/blue and star-forming/quenched subsamples and measure their GG lensing signals. We find that, at a specific stellar mass/luminosity, the red/quenched galaxies have stronger GG lensing signals than their counterparts, especially at large radii. These GG lensing signals can be used to probe the galaxy–halo mass relations and their environmental dependences in the halo occupation or conditional luminosity function framework.

The night-side hemisphere of Venus exhibits dark and bright regions as a result of spatially inhomogeneous cloud opacity, illuminated from below by infrared radiation from the deeper atmosphere. The 2-μm camera (IR2) onboard Akatsuki, Japan's Venus Climate Orbiter, is equipped with three narrow-band filters (1.735, 2.26, and 2.32 μm) to image the Venus night-side disk in well-known transparency windows of the CO2 atmosphere (Allen and Crawford 1984). In general, a cloud feature appears brightest when it is in the disk center and becomes darker as the zenith angle of emergent light increases. Such limb darkening was observed with Galileo/NIMS and mathematically approximated (Carlson et al., 1993). Limb-darkening correction helps to identify branches, in a 1.74-μm vs. 2.3-μm radiance scatter plot, each of which corresponds to a group of aerosols with similar properties. We analyzed Akatsuki/IR2 images to characterize the limb darkening for the three night-side filters. There is, however, contamination from the intense day-side disk blurred by IR2's point-spread function (PSF). It is found that infrared light can be multiply reflected within the Si substrate of the IR2 detector (1024 × 1024 pixel PtSi array), causing an elongated tail in the actual PSF. We treated this in two different ways. One is to approximate the PSF mathematically (with a combination of modified Lorentz functions); the other is to subtract the 2.26-μm image from the 2.32-μm image so that the blurred-light pattern can be obtained directly. By comparing results from these two methods, we are able to reasonably clean up the night-side images and extract the limb darkening. The physical interpretation of the limb darkening, as well as "true" time variations of cloud brightness, will be presented and discussed.
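
A sketch of the first approach, the analytic PSF model (the functional form is as named in the abstract, but every parameter value below is illustrative, not a fitted IR2 value): a narrow modified-Lorentzian core plus a broad term standing in for the substrate-reflection tail, with the tail's share of the encircled energy showing why day-side light contaminates pixels far across the disk.

```python
import numpy as np

def modified_lorentz(r, a, gamma, b):
    """Modified Lorentzian: a / (1 + (r/gamma)**b)."""
    return a / (1.0 + (r / gamma) ** b)

r = np.linspace(0.0, 200.0, 4001)   # radius in pixels
dr = r[1] - r[0]

core = modified_lorentz(r, 1.0, 2.0, 3.0)     # narrow diffraction-like core
tail = modified_lorentz(r, 1e-3, 40.0, 3.0)   # broad reflection tail
psf = core + tail

# Encircled energy for a radially symmetric PSF: integrate 2*pi*r*PSF(r) dr.
w = 2.0 * np.pi * r * psf
total = np.sum(w) * dr
far = np.sum(w[r > 20.0]) * dr
print(f"fraction of PSF energy beyond r = 20 px: {far / total:.2f}")
```

Even with a tail amplitude a thousand times below the core's, a substantial fraction of the energy lands beyond 20 pixels, which is why the blurred day-side pattern must be modelled or differenced away before limb darkening can be measured.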

We present a new matched-filter algorithm for direct detection of point sources in the immediate vicinity of bright stars. The stellar point-spread function (PSF) is first subtracted using a Karhunen–Loève image processing (KLIP) algorithm with angular and spectral differential imaging (ADI and SDI). The KLIP-induced distortion of the astrophysical signal is included in the matched-filter template by computing a forward model of the PSF at every position in the image. To optimize the performance of the algorithm, we conduct extensive planet injection and recovery tests and tune the exoplanet spectral templates and KLIP reduction aggressiveness to maximize the signal-to-noise ratio (S/N) of the recovered planets. We show that only two spectral templates are necessary to recover any young Jovian exoplanets with minimal S/N loss. We also developed a complete pipeline for the automated detection of point-source candidates, the calculation of receiver operating characteristics (ROC), contrast curves based on false positives, and completeness contours. We process in a uniform manner more than 330 data sets from the Gemini Planet Imager Exoplanet Survey and assess GPI's typical sensitivity as a function of the star and the hypothetical companion spectral type. This work allows for the first time a comparison of different detection algorithms at a survey scale accounting for both planet completeness and false-positive rate. We show that the new forward model matched filter allows the detection of 50% fainter objects than a conventional cross-correlation technique with a Gaussian PSF template for the same false-positive rate.
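
The gain from matched filtering can already be seen in the simple white-noise case that the KLIP forward model generalizes (a self-contained toy; the forward-modelled template and the correlated speckle noise of the real pipeline are not simulated here):

```python
import numpy as np

rng = np.random.default_rng(1)

def matched_filter_snr(data, template, noise_sigma):
    """S/N of a known template in white noise: <d, t> / (sigma * ||t||)."""
    t = template.ravel()
    d = data.ravel()
    return float(d @ t / (noise_sigma * np.sqrt(t @ t)))

# Toy "planet" PSF template: a 2D Gaussian injected into white noise.
n = 32
y, x = np.indices((n, n))
template = np.exp(-0.5 * ((x - 16) ** 2 + (y - 16) ** 2) / 2.0 ** 2)

sigma = 1.0
amplitude = 3.0
image = amplitude * template + sigma * rng.standard_normal((n, n))

snr_mf = matched_filter_snr(image, template, sigma)
# For white noise the expected S/N is amplitude * ||t|| / sigma.
expected = amplitude * np.sqrt((template ** 2).sum()) / sigma
print(f"matched-filter S/N: {snr_mf:.1f} (expected ~{expected:.1f})")
```

The matched filter is optimal only when the template matches the true (here KLIP-distorted) signal shape, which is exactly why the paper builds a forward model of the PSF at every image position rather than using a fixed Gaussian template.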

OBJECTIVES: This study aims to detect the prevalence of common mental disorders among patients seen by doctors at family health program (FHP) units in Petrópolis-RJ, Brazil, and to establish their nosological profile. METHOD: The study population included all 18- to 65-year-old patients who attended any of the participating family health program units during a 30-day period between August and December 2002 (n = 714). The prevalence of common mental disorders was assessed using the 12-item version of the General Health Questionnaire. To establish the nosological profile, the Composite International Diagnostic Interview was administered to all patients who screened positive for common mental disorders and agreed to return (n = 215). RESULTS: At a cut-off point of 2/3 the prevalence of common mental disorders was 56%, and at 4/5 it was 33%. The most frequent nosological categories among the positive patients were the depression and anxiety categories, along with posttraumatic stress disorder, somatoform pain disorder and dissociative disorders. There was a high frequency of comorbidity, especially between anxiety, depression, somatoform and dissociative disorders. CONCLUSIONS: The prevalence of common mental disorders and the nosological profile found in the FHP were similar to those of other primary care studies in Brazil, but some disorders (posttraumatic stress disorder, somatoform pain disorder and dissociative disorders) that had not been previously studied in this context were also very frequent. The high prevalence of common mental disorders found here reinforces the urgent need for systematic inclusion of this level of care in mental health assistance planning.

Highlights: • An optical sensor membrane is prepared by TMPyP and PNaSS-grafted PSF membrane. • The optical sensor membrane shows enhanced sorption for cadmium(II). • Visual and spectrophotometric detection can be achieved. • The functional membrane exhibits good stability and reusability. - Abstract: In this study, an optical sensor membrane was prepared for sorption and detection of cadmium(II) (Cd(II)) in aqueous solution. A polyanion, poly(sodium 4-styrenesulfonate) (PNaSS), was grafted onto the chloromethylated polysulfone (CMPSF) microporous membrane via surface-initiated ATRP. 5,10,15,20-tetrakis(4-N-methylpyridyl) porphyrin p-toluenesulfonate (TMPyP) was immobilized onto the PNaSS-grafted polysulfone (PSF-PNaSS) membrane through electrostatic interaction. The TMPyP-functionalized membrane exhibited enhanced sorption for, and a distinct color and spectral response to, Cd(II) in aqueous solution. A larger immobilization capacity of TMPyP on the membrane led to stronger sorption for Cd(II), while a smaller one gave the optical sensor a faster (within minutes) and more sensitive response to the ion. The detection limit study indicated that the functional membrane with a proper amount of TMPyP (<0.5 mg/g) could still show a color and spectral response to Cd(II) solutions at an extremely low concentration (10⁻⁴ mg/L). The optical sensor membrane exhibited good stability and reusability, which make it efficient for various sorptive removal and detection applications.

Current methods of image-based thickness measurement in thin sheet structures utilize second derivative zero crossings to locate the layer boundaries. It is generally acknowledged that the nonzero width of the point-spread function (PSF) limits the accuracy of this measurement procedure. We propose a model-based method that strongly reduces PSF-induced bias by incorporating the PSF into the thickness estimation method. We estimated the bias in thickness measurements in simulated thin sheet images as obtained from second derivative zero crossings. To gain insight into the range of sheet thickness where our method is expected to yield improved results, sheet thickness was varied between 0.15 and 1.2 mm with an assumed PSF as present in the high-resolution modes of current computed tomography (CT) scanners [full width at half maximum (FWHM) 0.5-0.8 mm]. Our model-based method was evaluated in practice by measuring layer thickness from CT images of a phantom mimicking two parallel cartilage layers in an arthrography procedure. CT arthrography images of cadaver wrists were also evaluated, and thickness estimates were compared to those obtained from high-resolution anatomical sections that served as a reference. The thickness estimates from the simulated images reveal that the method based on second derivative zero crossings shows considerable bias for layers in the submillimeter range. This bias is negligible for sheet thickness larger than 1 mm, where the size of the sheet is more than twice the FWHM of the PSF, but can be as large as 0.2 mm for a 0.5 mm sheet. The results of the phantom experiments show that the bias is effectively reduced by our method. The deviations from the true thickness, due to random fluctuations induced by quantum noise in the CT images, are of the order of 3% for a standard wrist imaging protocol. In the wrist the submillimeter thickness estimates from the CT arthrography images correspond within 10% to those estimated from the anatomical sections.
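
The bias mechanism is easy to reproduce numerically (a 1D toy with an assumed Gaussian PSF; the grid spacing and noise threshold are arbitrary choices of this sketch):

```python
import numpy as np

dx = 0.005                       # mm per sample
x = np.arange(-5.0, 5.0, dx)

def blurred_sheet(thickness, fwhm):
    """Ideal sheet (boxcar) convolved with a Gaussian PSF of the given FWHM."""
    sheet = (np.abs(x) < thickness / 2.0).astype(float)
    sigma = fwhm / 2.3548
    g = np.exp(-0.5 * (x / sigma) ** 2)
    g /= g.sum()
    return np.convolve(sheet, g, mode="same")

def thickness_from_zero_crossings(profile):
    """Distance between the outer zero crossings of the second derivative."""
    d2 = np.gradient(np.gradient(profile, dx), dx)
    core = profile > 0.01 * profile.max()     # ignore numerical noise in tails
    idx = np.where((np.diff(np.sign(d2)) != 0) & core[:-1])[0]
    return x[idx].max() - x[idx].min()

fwhm = 0.6  # mm, mid-range of the CT PSFs quoted above
t_thick = thickness_from_zero_crossings(blurred_sheet(1.2, fwhm))  # ~unbiased
t_thin = thickness_from_zero_crossings(blurred_sheet(0.5, fwhm))   # biased
print(f"1.2 mm sheet -> {t_thick:.2f} mm; 0.5 mm sheet -> {t_thin:.2f} mm")
```

The 1.2 mm sheet is recovered nearly exactly, while the 0.5 mm sheet comes out noticeably too thick; incorporating the known PSF into the estimator, as the paper proposes, removes this systematic offset.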

Here we present the results of various approaches to measure accurate colours and photometric redshifts (photo-z) from wide-field imaging data. We use data from the Canada-France-Hawaii Telescope Legacy Survey which have been re-processed by the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS) team in order to carry out a number of weak gravitational lensing studies. An emphasis is put on the correction of systematic effects in the photo-z arising from the different point-spread functions (PSFs) in the five optical bands. Different ways of correcting these effects are discussed and the resulting photo-z accuracies are quantified by comparing the photo-z to large spectroscopic redshift (spec-z) data sets. Careful homogenization of the PSF between bands leads to increased overall accuracy of photo-z. The gain is particularly pronounced at fainter magnitudes where galaxies are smaller and flux measurements are affected more by PSF effects. We discuss ways of defining more secure subsamples of galaxies as well as a shape- and colour-based star-galaxy separation method, and we present redshift distributions for different magnitude limits. We also study possible re-calibrations of the photometric zero-points (ZPs) with the help of galaxies with known spec-z. We find that if PSF effects are properly taken into account, a re-calibration of the ZPs becomes much less important suggesting that previous such re-calibrations described in the literature could in fact be mostly corrections for PSF effects rather than corrections for real inaccuracies in the ZPs. The implications of this finding for future surveys like the Kilo Degree Survey (KiDS), Dark Energy Survey (DES), Large Synoptic Survey Telescope or Euclid are mixed. On the one hand, ZP re-calibrations with spec-z values might not be as accurate as previously thought. On the other hand, careful PSF homogenization might provide a way out and yield accurate, homogeneous photometry without the need for full
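
For purely Gaussian PSFs, homogenization reduces to convolving each band with a matching kernel of width sqrt(sigma_target² − sigma_band²); the toy below (seeing values and aperture size are illustrative) shows an aperture-colour bias appearing and then vanishing, which is the effect the photo-z corrections above target.

```python
import numpy as np

def gaussian2d(shape, sigma):
    """Centred, unit-sum 2D Gaussian."""
    y, x = np.indices(shape)
    r2 = (x - shape[1] // 2) ** 2 + (y - shape[0] // 2) ** 2
    g = np.exp(-0.5 * r2 / sigma ** 2)
    return g / g.sum()

def fft_convolve(img, kernel):
    """Circular convolution with a centred kernel."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) *
                                np.fft.fft2(np.fft.ifftshift(kernel))))

shape = (129, 129)
y, x = np.indices(shape)
aperture = np.hypot(x - 64, y - 64) < 4.0

# The same unit-flux star observed with different seeing in two bands.
img_g = gaussian2d(shape, 1.5)   # good seeing
img_r = gaussian2d(shape, 2.5)   # poor seeing

# Fixed-aperture flux ratio is biased away from the true colour (= 1).
biased = img_g[aperture].sum() / img_r[aperture].sum()

# Homogenize the g band to the r-band PSF, then re-measure.
match = gaussian2d(shape, np.sqrt(2.5 ** 2 - 1.5 ** 2))
img_g_hom = fft_convolve(img_g, match)
fixed = img_g_hom[aperture].sum() / img_r[aperture].sum()
print(f"aperture flux ratio before: {biased:.2f}, after: {fixed:.2f}")
```

Real survey PSFs are not Gaussian, so pipelines such as CFHTLenS derive position-dependent matching kernels from star images rather than this closed-form width.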

We measure the weak lensing masses and galaxy distributions of four massive galaxy clusters observed during the Science Verification phase of the Dark Energy Survey (DES). This pathfinder study is meant to (1) validate the Dark Energy Camera (DECam) imager for the task of measuring weak lensing shapes, and (2) utilize DECam's large field of view to map out the clusters and their environments over 90 arcmin. We conduct a series of rigorous tests on astrometry, photometry, image quality, point-spread function (PSF) modelling, and shear measurement accuracy to single out flaws in the data and also to identify the optimal data processing steps and parameters. We find Science Verification data from DECam to be suitable for the lensing analysis described in this paper. The PSF is generally well behaved, but the modelling is rendered difficult by a flux-dependent PSF width and ellipticity. We employ photometric redshifts to distinguish between foreground and background galaxies, and a red-sequence cluster finder to provide cluster richness estimates and cluster-galaxy distributions. By fitting Navarro-Frenk-White profiles to the clusters in this study, we determine weak lensing masses that are in agreement with previous work. For Abell 3261, we provide the first estimates of redshift, weak lensing mass, and richness. In addition, the cluster-galaxy distributions indicate the presence of filamentary structures attached to 1E 0657-56 and RXC J2248.7-4431, stretching out as far as 1° (approximately 20 Mpc), showcasing the potential of DECam and DES for detailed studies of degree-scale features on the sky.

The objective of the study was to evaluate state-of-the-art clinical PET/CT technology in performing static and dynamic imaging of several mice simultaneously. A mouse-sized phantom was imaged mimicking simultaneous imaging of three mice, with computation of recovery coefficients (RCs) and spillover ratios (SORs). Fifteen mice harbouring abdominal or subcutaneous tumours were imaged on a clinical PET/CT with point-spread function (PSF) reconstruction after injection of [18F]fluorodeoxyglucose or [18F]fluorothymidine. Three of these mice were imaged alone and simultaneously at radial positions -5, 0 and 5 cm. The remaining 12 tumour-bearing mice were imaged in groups of 3 to establish the quantitative accuracy of PET data using ex vivo gamma counting as the reference. Finally, a dynamic scan was performed in three mice simultaneously after the injection of 68Ga-ethylenediaminetetraacetic acid (EDTA). For typical lesion sizes of 7-8 mm, phantom experiments indicated RCs of 0.42 and 0.76 for ordered subsets expectation maximization (OSEM) and PSF reconstruction, respectively. For PSF reconstruction, SORair and SORwater were 5.3 and 7.5%, respectively. Strong correlations were found between PET and reference measurements (r² = 0.97 and r² = 0.98, slope = 0.89; r² = 0.96, slope = 0.62), including the 68Ga-EDTA dynamic acquisition. New generation clinical PET/CT can be used for simultaneous imaging of multiple small animals in experiments requiring high throughput and where a dedicated small animal PET system is not available. (orig.)
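
The phantom metrics reported above follow the usual NEMA-style definitions; minimal helpers (function names and the example numbers are illustrative, echoing the values quoted in the abstract):

```python
def recovery_coefficient(measured_conc, true_conc):
    """RC: measured over true activity concentration in a hot rod or sphere.
    Values below 1 reflect partial-volume losses from the finite PSF."""
    return measured_conc / true_conc

def spillover_ratio(cold_insert_conc, background_conc):
    """SOR: residual counts in a cold (air or water) insert relative to the
    warm background; non-zero values reflect spill-in from PSF and scatter."""
    return cold_insert_conc / background_conc

# Illustrative numbers in the same ballpark as the phantom results above.
rc_psf = recovery_coefficient(7.6, 10.0)    # 0.76, cf. PSF reconstruction
rc_osem = recovery_coefficient(4.2, 10.0)   # 0.42, cf. OSEM
sor_air = spillover_ratio(0.53, 10.0)       # 0.053, i.e. 5.3 %
```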

Using the 1-BM-C beamline at the Advanced Photon Source (APS), we have performed the initial indirect x-ray imaging point-spread function (PSF) test of a unique 88-mm diameter YAG:Ce single crystal of only 100-micron thickness. The crystal was bonded to a fiber optic plate (FOP) for mechanical support and to allow the option for FO coupling to a large-format camera. This configuration's resolution was compared to that of self-supported 25-mm diameter crystals, with and without an Al reflective coating. An upstream monochromator was used to select 17-keV x-rays from the broadband APS bending magnet source of synchrotron radiation. The upstream, adjustable Mo collimators were then used to provide a series of x-ray source transverse sizes from 200 microns down to about 15-20 microns (FWHM) at the crystal surface. The emitted scintillator radiation was in this case lens coupled to the ANDOR Neo sCMOS camera, and the indirect x-ray images were processed offline by a MATLAB-based image processing program. Based on single Gaussian peak fits to the projected x-ray image profiles, we observed a 10.5-micron PSF. This sample thus exhibited superior spatial resolution to standard P43 polycrystalline phosphors of the same thickness, which would have about a 100-micron PSF. Lastly, this single-crystal resolution combined with the 88-mm diameter makes it a candidate to support future x-ray diffraction or wafer topography experiments.
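
The quoted PSF width comes from Gaussian fits to projected profiles; an even simpler, model-free cross-check is the interpolated full width at half maximum of the profile. The sketch below applies it to synthetic, noise-free Gaussian data using the paper's 10.5-micron scale purely as a consistency check:

```python
import numpy as np

def fwhm_of_profile(x, y):
    """FWHM via linear interpolation of the two half-maximum crossings."""
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i0, i1 = above[0], above[-1]

    def cross(i_above, i_below):
        # linear interpolation between a sample above and one below half-max
        return x[i_below] + (half - y[i_below]) * \
            (x[i_above] - x[i_below]) / (y[i_above] - y[i_below])

    return cross(i1, i1 + 1) - cross(i0, i0 - 1)

# Synthetic projected profile: a Gaussian with a 10.5-micron FWHM.
x = np.linspace(-50.0, 50.0, 1001)        # microns
sigma = 10.5 / 2.3548                      # FWHM = 2*sqrt(2*ln 2) * sigma
y = np.exp(-0.5 * (x / sigma) ** 2)
print(f"recovered FWHM: {fwhm_of_profile(x, y):.2f} microns")
```

On real, noisy profiles a fitted model is more robust than direct interpolation, which is presumably why the measurement above relies on Gaussian peak fits.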

The objective of this study was to explore the feasibility of harmonising performance for PET/CT systems equipped with time-of-flight (ToF) and resolution modelling/point-spread function (PSF) technologies. A second aim was to produce a working prototype of new harmonising criteria with higher contrast recoveries than current EARL standards using various SUV metrics. Four PET/CT systems with both ToF and PSF capabilities from three major vendors were used to acquire and reconstruct images of the NEMA NU2-2007 body phantom filled in conformance with EANM EARL guidelines. A total of 15 reconstruction parameter sets of varying pixel size, post filtering and reconstruction type, with three different acquisition durations, were used to compare the quantitative performance of the systems. A target range for recovery curves was established such that it would accommodate the highest matching recoveries from all investigated systems. These updated criteria were validated on 18 additional scanners from 16 sites in order to demonstrate the scanners' ability to meet the new target range. Each of the four systems was found to be capable of producing harmonising reconstructions with similar recovery curves. The five reconstruction parameter sets producing harmonising results significantly increased SUVmean (25%) and SUVmax (26%) contrast recoveries compared with current EARL specifications. Additional prospective validation performed on 18 scanners from 16 EARL accredited sites demonstrated the feasibility of updated harmonising specifications. SUVpeak was found to significantly reduce the variability in quantitative results while producing lower recoveries in smaller (≤17 mm diameter) sphere sizes. Harmonising PET/CT systems with ToF and PSF technologies from different vendors was found to be feasible. The harmonisation of such systems would require an update to the current multicentre accreditation program EARL in order to accommodate higher recoveries. SUVpeak should be further

We propose to obtain deep multi-band NIRCam and NIRISS imaging of three resolved stellar systems within 1 Mpc (NOI 104). We will use this broad science program to optimize observational setups and to develop data reduction techniques that will be common to JWST studies of resolved stellar populations. We will combine our expertise in HST resolved star studies with these observations to design, test, and release point-spread function (PSF) fitting software specific to JWST. PSF photometry is at the heart of resolved stellar populations studies, but is not part of the standard JWST reduction pipeline. Our program will establish JWST-optimized methodologies in six scientific areas: star formation histories, measurement of the sub-Solar mass stellar IMF, extinction maps, evolved stars, proper motions, and globular clusters, all of which will be common pursuits for JWST in the local Universe. Our observations of globular cluster M92, ultra-faint dwarf Draco II, and star-forming dwarf WLM, will be of high archival value for other science such as calibrating stellar evolution models, measuring properties of variable stars, and searching for metal-poor stars. We will release the results of our program, including PSF fitting software, matched HST and JWST catalogs, clear documentation, and step-by-step tutorials (e.g., Jupyter notebooks) for data reduction and science application, to the community prior to the Cycle 2 Call for Proposals. We will host a workshop to help community members plan their Cycle 2 observations of resolved stars. Our program will provide blueprints for the community to efficiently reduce and analyze JWST observations of resolved stellar populations.

We investigate morphological properties of 61 Lyα emitters (LAEs) at z = 4.86 identified in the COSMOS field, based on Hubble Space Telescope Advanced Camera for Surveys (ACS) imaging data in the F814W band. Out of the 61 LAEs, we find the ACS counterparts for 54 LAEs. Eight LAEs show double-component structures with a mean projected separation of 0.″63 (∼4.0 kpc at z = 4.86). Considering the faintness of these ACS sources, we carefully evaluate their morphological properties, that is, size and ellipticity. While some of them are compact and indistinguishable from the point-spread function (PSF) half-light radius of 0.″07 (∼0.45 kpc), the others are clearly larger than the PSF size and spatially extended up to 0.″3 (∼1.9 kpc). We find that the ACS sources show a positive correlation between ellipticity and size and that ACS sources with large size and round shape are absent. Our Monte Carlo simulation suggests that the correlation can be explained by (1) the deformation effects via PSF broadening and shot noise or (2) source blending, in which two or more sources with small separation are blended in our ACS image and detected as a single elongated source. Therefore, the 46 single-component LAEs could contain sources that consist of double (or multiple) components with small spatial separation (i.e., ≲0.″3 or 1.9 kpc). Further observation with high angular resolution at longer wavelengths (e.g., rest-frame wavelengths of ≳4000 Å) is needed to determine which interpretation is correct for our LAE sample.
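
Size and ellipticity of the kind evaluated here are commonly derived from flux-weighted second moments; the sketch below (standard moment definitions, with illustrative blob sizes) also reproduces the blending effect, where two compact components masquerade as one larger, elongated source:

```python
import numpy as np

def shape_moments(img):
    """Flux-weighted rms size and ellipticity (1 - b/a) from second moments."""
    y, x = np.indices(img.shape)
    f = img.sum()
    xc, yc = (x * img).sum() / f, (y * img).sum() / f
    qxx = ((x - xc) ** 2 * img).sum() / f
    qyy = ((y - yc) ** 2 * img).sum() / f
    qxy = ((x - xc) * (y - yc) * img).sum() / f
    lam = np.sqrt((qxx - qyy) ** 2 + 4.0 * qxy ** 2)
    a2, b2 = (qxx + qyy + lam) / 2.0, (qxx + qyy - lam) / 2.0
    return np.sqrt(qxx + qyy), 1.0 - np.sqrt(b2 / a2)

def blob(shape, yc, xc, sigma):
    """Circular Gaussian component."""
    y, x = np.indices(shape)
    return np.exp(-0.5 * ((x - xc) ** 2 + (y - yc) ** 2) / sigma ** 2)

single = blob((64, 64), 32, 32, 2.0)
blend = blob((64, 64), 32, 28, 2.0) + blob((64, 64), 32, 36, 2.0)

size_s, ell_s = shape_moments(single)   # compact and round
size_b, ell_b = shape_moments(blend)    # larger *and* more elliptical
print(f"single: size {size_s:.1f}, e {ell_s:.2f}; "
      f"blend: size {size_b:.1f}, e {ell_b:.2f}")
```

The unresolved blend is both bigger and more elongated than either component, which is exactly the size–ellipticity correlation the Monte Carlo simulation attributes to interpretation (2).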

Accurate determination of the line spread function (LSF) on the basis of the edge processing algorithm in X-ray imaging systems is one of the most basic procedures for evaluating the performance of such systems. Extensive research has been focused on algorithms for the precise or fast measurement of the LSF in digital X-ray systems. Most of the standard methods for evaluating the performance of an imaging system are based on a fully digitalized radiographic system or a film-based system. However, images obtained by computed radiography (CR), which converts a captured analog signal into a digital image through an analog-to-digital converting scanner, show the combined characteristics of analog and digital imaging systems. Fundamentally, the characteristics of digital imaging systems differ substantially from those of film imaging systems because of their different methods of acquiring and displaying image data. In addition, a system with both analog and digital components has characteristics that differ from those of both digital and analog systems. In this research, we present a new modulation transfer function (MTF) measurement method that mimics existing MTF measurements but satisfies existing standard protocols through modification of the underlying assumptions. For the LSF and the point-spread function measured with a CR system, the developed edge algorithm shows better performance than the conventional methods. We also demonstrate the usefulness of this method in an actual measurement with a CR digital X-ray imaging system.
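
The standard chain behind such measurements, edge image to edge-spread function (ESF), LSF by differentiation, MTF by Fourier transform, can be sketched on synthetic data. A Gaussian system PSF is assumed here so the result can be checked against the closed form MTF(f) = exp(-2(pi*sigma*f)²); the real slanted-edge oversampling step is omitted:

```python
import numpy as np

dx = 0.01                         # mm per sample
x = np.arange(-5.0, 5.0, dx)
sigma = 0.08                      # mm, assumed Gaussian system PSF

# Synthetic ESF: an ideal step blurred by the system PSF.
step = (x > 0).astype(float)
g = np.exp(-0.5 * (x / sigma) ** 2)
g /= g.sum()
esf = np.convolve(step, g, mode="same")

# LSF = d(ESF)/dx; crop the borders where zero-padding artefacts live.
core = np.abs(x) < 4.0
lsf = np.gradient(esf, dx)[core]

# MTF = normalized magnitude of the Fourier transform of the LSF.
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]
freqs = np.fft.rfftfreq(lsf.size, d=dx)     # cycles per mm

i = np.argmin(np.abs(freqs - 2.0))
print(f"MTF at {freqs[i]:.2f} cy/mm: {mtf[i]:.3f} "
      f"(analytic: {np.exp(-2 * (np.pi * sigma * freqs[i]) ** 2):.3f})")
```

Differentiation amplifies noise, which is why edge-based methods for real CR data, including the algorithm described above, must be more careful about pre-filtering and sampling than this noise-free sketch.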

To investigate the sensitivity of a Monte Carlo (MC) model of a standard clinical amorphous silicon (a-Si) electron portal imaging device (EPID) to variations in optical photon transport parameters. The Geant4 MC toolkit was used to develop a comprehensive model of an indirect-detection a-Si EPID incorporating x-ray and optical photon transport. The EPID was modeled as a series of uniform layers with properties specified by the manufacturer (PerkinElmer, Santa Clara, CA) of a research EPID at our centre. Optical processes that were modeled include bulk absorption, Rayleigh scattering, and boundary processes (reflection and refraction). Model performance was evaluated by scoring optical photons absorbed by the a-Si photodiode as a function of radial distance from a point source of x-rays on an event-by-event basis (0.025 mm resolution). Primary x-ray energies were sampled from a clinical 6 MV photon spectrum. Simulations were performed by varying optical transport parameters and the resulting point-spread functions (PSFs) were compared. The optical parameters investigated include: x-ray transport cutoff thresholds; absorption path length; optical energy spectrum; refractive indices; and the 'roughness' of boundaries within phosphor screen layers. The transport cutoffs and refractive indices studied were found to minimally affect resulting PSFs. A monoenergetic optical spectrum slightly broadened the PSF in comparison with the use of a polyenergetic spectrum. The absorption path length only significantly altered the PSF when decreased drastically. Variations in the treatment of boundaries noticeably broadened resulting PSFs. Variation in optical transport parameters was found to affect resulting PSF calculations. Current work is focusing on repeating this analysis with a coarser resolution more typical of a commercial a-Si EPID to observe if these effects continue to alter the EPID PSF. Experimental measurement of the EPID line spread function to validate these

Key points Abnormal activation of motoneurons in the spinal cord by sensory pathways is thought to contribute to impaired movement control and spasticity in individuals with cerebral palsy. Here we use single motor unit recordings to show how individual motoneurons in the spinal cord respond to sensory inputs in a group of participants with cerebral palsy having different degrees of motor dysfunction. In participants who had problems walking independently and required assistive devices such as wheelchairs, sensory pathways only excited motoneurons in the spinal cord. In contrast, in participants with cerebral palsy who walked independently for long distances, sensory inputs both inhibited and excited motoneurons in the spinal cord, similar to what we found in uninjured control participants. These findings demonstrate that in individuals with severe cerebral palsy, inhibitory control of motoneurons from sensory pathways is reduced and may contribute to motor dysfunction and spasticity. Abstract Reduced inhibition of spinal motoneurons by sensory pathways may contribute to heightened reflex activity, spasticity and impaired motor function in individuals with cerebral palsy (CP). To measure if the activation of inhibitory post‐synaptic potentials (IPSPs) by sensory inputs is reduced in CP, the tonic discharge rate of single motor units from the soleus muscle was plotted time‐locked to the occurrence of a sensory stimulation to produce peri‐stimulus frequencygrams (PSFs). Stimulation to the medial arch of the foot was used to activate cutaneomuscular afferents in 17 adults with bilateral spastic CP and 15 neurologically intact (NI) peers. Evidence of IPSP activation from the PSF profiles, namely a marked pause or reduction in motor unit firing rates at the onset of the cutaneomuscular reflex, was found in all NI participants but in only half of participants with CP. In the other half of the participants with CP, stimulation of cutaneomuscular afferents produced a PSF

Integrated clinical whole-body PET/MR systems were introduced in 2010. In order to bring this technology into clinical usage, it is of great importance to compare the performance with the well-established PET/CT. The aim of this study was to evaluate PET performance, with focus on image quality, on Siemens Biograph mMR (PET/MR) and Siemens Biograph mCT (PET/CT). A direct quantitative comparison of the performance characteristics between the mMR and mCT system was performed according to National Electrical Manufacturers Association (NEMA) NU 2-2007 protocol. Spatial resolution, sensitivity, count rate and image quality were evaluated. The evaluation was supplemented with additional standardized uptake value (SUV) measurements. The spatial resolution was similar for the two systems. Average sensitivity was higher for the mMR (13.3 kcps/MBq) compared to the mCT system (10.0 kcps/MBq). Peak noise equivalent count rate (NECR) was slightly higher for the mMR (196 kcps @ 24.4 kBq/mL) compared to the mCT (186 kcps @ 30.1 kBq/mL). Scatter fractions in the clinical activity concentration range yielded lower values for the mCT (34.9 %) compared to those for the mMR (37.0 %). Best image quality of the systems resulted in approximately the same mean hot sphere contrast and a difference of 19 percentage points (pp) in mean cold contrast, in favour of the mCT. In general, point-spread function (PSF) increased hot contrast and time of flight (TOF) increased both hot and cold contrast. Highest hot contrast for the smallest sphere (10 mm) was achieved with the combination of TOF and PSF on the mCT. Lung residual error was higher for the mMR (22 %) than that for the mCT (17 %), with no effect of PSF. With TOF, lung residual error was reduced to 8 % (mCT). SUV was accurate for both systems, but PSF caused overestimations for the 13-, 17- and 22-mm spheres. Both systems proved good performance characteristics, and the PET image quality of the mMR was close to that of the m

Whole body gamma camera images acquired after I-131 treatment for thyroid cancer can suffer from collimator septal penetration artefacts because of the high energy of the gamma photons. This results in the appearance of ‘spoke’ artefacts, emanating from regions of high activity concentration, caused by the non-isotropic attenuation of the collimator. Deconvolution has the potential to reduce such artefacts, by taking into account the non-Gaussian point-spread function (PSF) of the system. A Richardson–Lucy deconvolution algorithm, with and without prior scatter-correction was tested as a method of reducing septal penetration in planar gamma camera images. Phantom images (hot spheres within a warm background) were acquired and deconvolution using a measured PSF was applied. The results were evaluated through region-of-interest and line profile analysis to determine the success of artefact reduction and the optimal number of deconvolution iterations and damping parameter (λ). Without scatter-correction, the optimal results were obtained with 15 iterations and λ = 0.01, with the counts in the spokes reduced to 20% of the original value, indicating a substantial decrease in their prominence. When a triple-energy-window scatter-correction was applied prior to deconvolution, the optimal results were obtained with six iterations and λ = 0.02, which reduced the spoke counts to 3% of the original value. The prior application of scatter-correction therefore produced the best results, with a marked change in the appearance of the images. The optimal settings were then applied to six patient datasets, to demonstrate its utility in the clinical setting. In all datasets, spoke artefacts were substantially reduced after the application of scatter-correction and deconvolution, with the mean spoke count being reduced to 10% of the original value. This indicates that deconvolution is a promising technique for septal penetration artefact reduction that
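The core iteration of Richardson–Lucy deconvolution can be sketched in 1-D (a minimal, undamped version: the damping parameter λ and the scatter-correction used in the abstract above are deliberately omitted):

```python
import numpy as np

def richardson_lucy(blurred, psf, n_iter=15):
    """Undamped 1-D Richardson-Lucy deconvolution.
    Each iteration multiplies the current estimate by the back-projected
    ratio of the measured data to the re-blurred estimate."""
    psf = psf / psf.sum()
    psf_mirror = psf[::-1]
    estimate = np.full_like(blurred, blurred.mean())   # flat positive start
    for _ in range(n_iter):
        reblurred = np.convolve(estimate, psf, mode="same")
        ratio = blurred / np.maximum(reblurred, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# A point source on a warm background, blurred by a Gaussian PSF
truth = np.full(64, 0.1)
truth[32] = 5.0
kernel = np.exp(-0.5 * (np.arange(-4, 5) / 1.5) ** 2)
kernel /= kernel.sum()
blurred = np.convolve(truth, kernel, mode="same")
restored = richardson_lucy(blurred, kernel, n_iter=30)
```

The multiplicative update keeps the estimate non-negative, which is why the algorithm suits count-limited nuclear medicine data; damped variants additionally suppress noise amplification at high iteration numbers.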

Plants affect microbial communities and abiotic properties of nearby soils, which in turn influence plant growth and interspecific interactions, forming a plant-soil feedback (PSF). PSF is a key determinant influencing plant population dynamics, community structure, and ecosystem functions. Despite accumulating evidence for the importance of PSF and development of specific PSF models, different models are not yet fully integrated. Here, we review the theoretical progress in understanding PSF. When first proposed, PSF was integrated with various mathematical frameworks to discuss its influence on plant competition. Recent theoretical models have advanced PSF research at different levels of ecological organization by considering multiple species, applying spatially explicit simulations to examine how local-scale predictions apply to larger scales, and assessing the effect of PSF on plant temporal dynamics over the course of succession. We then review two foundational models for microbial- and litter-mediated PSF. We present a theoretical framework to illustrate that although the two models are typically presented separately, their behavior can be understood together by invasibility analysis. We conclude with suggestions for future directions in PSF theoretical studies, which include specifically addressing microbial diversity to integrate litter- and microbial-mediated PSF, and applying PSF to general coexistence theory through a trait-based approach.
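One widely used summary statistic from the microbial-mediated modeling tradition referenced above is the net pairwise feedback coefficient (the Bever-style formula below is supplied here as background; the review itself does not reproduce it):

```python
def net_feedback(alpha_A, alpha_B, beta_A, beta_B):
    """Net pairwise plant-soil feedback, I_s = alpha_A + beta_B - alpha_B - beta_A.

    alpha_A: growth of plant A on soil conditioned by plant A
    alpha_B: growth of plant A on soil conditioned by plant B
    beta_A:  growth of plant B on soil conditioned by plant A
    beta_B:  growth of plant B on soil conditioned by plant B

    Negative I_s (each species performs worse on its own soil) stabilizes
    coexistence; positive I_s favors competitive exclusion.
    """
    return alpha_A + beta_B - alpha_B - beta_A

# Conspecific soil suppresses growth -> negative (coexistence-promoting) feedback
I_neg = net_feedback(alpha_A=0.4, alpha_B=0.9, beta_A=0.8, beta_B=0.3)
# Conspecific soil enhances growth -> positive (exclusion-promoting) feedback
I_pos = net_feedback(alpha_A=0.9, alpha_B=0.4, beta_A=0.3, beta_B=0.8)
```

The sign of I_s is exactly the quantity an invasibility analysis interrogates: a rare invader benefits when residents degrade their own soil more than the invader's.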

The detection of Earth-like exoplanets with the radial velocity method requires extreme Doppler precision and long-term stability in order to measure tiny reflex velocities in the host star. Recent planet searches have led to the detection of so-called "super-Earths" (up to a few Earth masses) that induce radial velocity changes of about 1 m/s. However, the detection of true Earth analogs requires a precision of 10 cm/s. One of the largest factors limiting Doppler precision is variation in the point-spread function (PSF) from observation to observation due to changes in the illumination of the slit and spectrograph optics. Thus, this stability has become a focus of current instrumentation work. Fiber optics have been used since the 1980s to couple telescopes to high-precision spectrographs, initially for simpler mechanical design and control. However, fiber optics are also naturally efficient scramblers. Scrambling refers to a fiber's ability to produce an output beam independent of input. Our research is focused on characterizing the scrambling properties of several types of fibers, including circular, square and octagonal fibers. By measuring the intensity distribution after the fiber as a function of input beam position, we can simulate guiding errors that occur at an observatory. Through this, we can determine which fibers produce the most uniform outputs for the most severe guiding errors, improving the PSF and allowing sub-m/s precision. However, extensive testing of fibers of supposedly identical core diameter, length and shape from the same manufacturer has revealed the "personality" of individual fibers. Personality describes differing intensity patterns for supposedly duplicate fibers illuminated identically. Here, we present our results on scrambling characterization as a function of fiber type, while studying individual fiber personality.

Restoring motion-blurred images is a key technology in opto-electronic detection systems. Imaging sensors such as CCDs and infrared imagers mounted on moving platforms travel with those platforms at high speed, and the resulting images are blurred. This degradation hampers subsequent tasks such as object detection, target recognition and tracking, so motion-blurred images must be restored before moving targets can be detected in subsequent frames. Motivated by real weapon-system requirements for handling targets against complex backgrounds, this dissertation applies recent theory from image processing and computer vision to motion deblurring and motion detection. The principal content is as follows: 1) When no prior knowledge of the degradation function is available, uniformly motion-blurred images are restored. First, the blur parameters, namely the extent and direction of the point-spread function (PSF), are estimated separately in the logarithmic frequency domain: the direction of the PSF is calculated by extracting the central bright line of the spectrum, and the extent is computed by minimizing the correlation between the Fourier spectrum of the blurred image and a detection function. In addition, a windowing technique is employed to remove striping from the deblurred image, yielding a cleaner result. 2) Based on the principle of non-uniform exposure in infrared imaging, a new restoration model for infrared blurred images is developed. The non-uniform exposure curve is fitted to experimental data, and the blurred images are restored using the fitted curve.
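The extent-estimation idea can be illustrated with a hypothetical 1-D sketch (not the dissertation's exact 2-D algorithm): a uniform motion blur of extent L is a box PSF whose Fourier magnitude vanishes at frequency indices that are multiples of N/L, and those zeros survive in the spectrum of the blurred signal.

```python
import numpy as np

def estimate_blur_extent(blurred):
    """Estimate a uniform blur extent from the first spectral zero.
    For a box PSF of length L applied by circular convolution (with L
    dividing N), the spectrum vanishes at indices k = N/L, 2N/L, ..."""
    n = blurred.size
    mag = np.abs(np.fft.fft(blurred))
    threshold = 1e-6 * mag.max()
    for k in range(1, n // 2):
        if mag[k] < threshold:          # first (near-exact) spectral zero
            return round(n / k)
    return None

rng = np.random.default_rng(0)
n, L = 128, 8
signal = rng.uniform(0.5, 1.5, n)       # toy scene
psf = np.zeros(n)
psf[:L] = 1.0 / L                       # uniform motion-blur PSF of extent L
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(psf)))
```

With sensor noise the zeros become minima rather than exact nulls, which is why practical methods correlate the log-spectrum against a detection function instead of thresholding.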

We present a general framework for matching the point-spread function (PSF), photometric scaling and sky background between two images, a subject which is commonly referred to as difference image analysis (DIA). We introduce the new concept of a spatially varying photometric scale factor which will be important for DIA applied to wide-field imaging data in order to adapt to transparency and airmass variations across the field-of-view. Furthermore, we demonstrate how to separately control the degree of spatial variation of each kernel basis function, the photometric scale factor and the differential sky background. We discuss the common choices for kernel basis functions within our framework, and we introduce the mixed-resolution delta basis functions to address the problem of the size of the least-squares problem to be solved when using delta basis functions. We validate and demonstrate our algorithm on simulated and real data. We also describe a number of useful optimizations that may be capitalized on during the construction of the least-squares matrix and which have not been reported previously. We pay special attention to presenting a clear notation for the DIA equations which are set out in a way that will hopefully encourage developers to tackle the implementation of DIA software.
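A toy 1-D version of kernel fitting with delta basis functions (constant kernel and photometric scale, no spatial variation or sky term; a sketch, not the authors' full framework): each kernel pixel is a free coefficient multiplying a shifted copy of the reference, and the coefficients follow from linear least squares.

```python
import numpy as np

rng = np.random.default_rng(1)
ref = rng.normal(100.0, 10.0, 256)      # reference "image" (1-D toy)
k_true = np.array([0.2, 0.6, 0.2])      # matching kernel to recover
shifts = [-1, 0, 1]

# Target = circular convolution of the reference with the true kernel
A = np.stack([np.roll(ref, s) for s in shifts], axis=1)
target = A @ k_true

# Delta basis functions: one least-squares coefficient per kernel pixel
k_fit, *_ = np.linalg.lstsq(A, target, rcond=None)
```

In the full problem the design matrix also carries polynomial spatial weights per basis function, a sky column, and the photometric scale, which is why controlling the matrix size (the motivation for mixed-resolution bases) matters.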

Double mask edge illumination (DM-EI) set-ups can detect differential phase and attenuation information from a sample. However, analytical separation of the two signals often requires acquiring two frames with inverted differential phase contrast signals. Typically, between these two acquisitions, the first mask is moved to create a different illumination condition. This can lead to potential errors which adversely affect the data collected. In this paper, we implement a single mask EI laboratory set-up that allows for a single shot retrieval of the differential phase and attenuation images, without the need for a high resolution detector or high magnification. As well as simplifying mask alignment, the advantages of the proposed set-up can be exploited in one of two ways: either the total acquisition time can be halved with respect to the DM-EI set-up or, for the same acquisition time, twice the statistics can be collected. In this latter configuration, the signal-to-noise ratio and contrast in the mixed intensity images, and the angular sensitivity of the two set-ups were compared. We also show that the angular sensitivity of the single mask set-up can be well approximated from its illumination curve, which has been modelled as a convolution between the source spatial distribution at the detector plane, the pre-sample mask and the detector point-spread function (PSF). A polychromatic wave optics simulation was developed on these bases and benchmarked against experimental data. It can also be used to predict the angular sensitivity and contrast of any set-up as a function of detector PSF.
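The convolution model for the illumination curve can be sketched numerically (a hypothetical 1-D example with made-up profiles: Gaussian source distribution, top-hat mask aperture, Gaussian detector PSF); a useful check is that the variances of the three contributions simply add under convolution.

```python
import numpy as np

def discrete_profile_std(p):
    """Standard deviation of a (possibly unnormalized) 1-D profile."""
    x = np.arange(p.size, dtype=float)
    p = p / p.sum()
    mu = np.sum(x * p)
    return np.sqrt(np.sum((x - mu) ** 2 * p))

x = np.arange(-30, 31, dtype=float)
source = np.exp(-0.5 * (x / 2.0) ** 2)     # source distribution, sigma = 2
mask = (np.abs(x) <= 2).astype(float)      # mask aperture, top-hat of width 5
det_psf = np.exp(-0.5 * (x / 1.5) ** 2)    # detector PSF, sigma = 1.5

# Illumination curve = source (*) mask (*) detector PSF
illum = np.convolve(np.convolve(source, mask), det_psf)
# Expected variance: 2**2 + (5**2 - 1)/12 + 1.5**2 = 8.25
```

Broadening any one component widens the illumination curve and therefore degrades the achievable angular sensitivity, which is the relationship exploited in the abstract above.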

Degradation of images caused by systematic uncertainties can be reduced when one knows the features of the spoiling agent. Typical uncertainties of this kind arise in radiographic images due to the non-zero resolution of the detector used to acquire them, and from the finite size of the source employed in the acquisition, or from the beam divergence when extended sources are used. Both features blur the image, which, instead of a single point, exhibits a spot with a fading edge, thus reproducing the point-spread function (PSF) of the system. Once this spoiling function is known, an inverse problem approach, involving inversion of matrices, can then be used to retrieve the original image. As these matrices are generally ill-conditioned, due to statistical fluctuation and truncation errors, iterative procedures should be applied, such as the Richardson-Lucy algorithm. This algorithm has been applied in this work to unfold radiographic images acquired by transmission of thermal neutrons and gamma-rays. After this procedure, the resulting images undergo an active filtering which fairly improves their final quality at a negligible cost in terms of processing time. The filter ruling the process is based on the matrix of the correction factors for the last iteration of the deconvolution procedure. Synthetic images degraded with a known PSF and subjected to the same treatment have been used as a benchmark to evaluate the soundness of the developed active filtering procedure. The deconvolution and filtering algorithms have been incorporated into a Fortran program, written to deal with real images, generate the synthetic ones and display both.

Positron emission tomography (PET) images are typically reconstructed with an in-plane pixel size of approximately 4 mm for cancer imaging. The objective of this work was to evaluate the effect of using smaller pixels on general oncologic lesion detection. A series of observer studies was performed using experimental phantom data from the Utah PET Lesion Detection Database, which modeled whole-body FDG PET cancer imaging of a 92 kg patient. The data comprised 24 scans over 4 days on a Biograph mCT time-of-flight (TOF) PET/CT scanner, with up to 23 lesions (diam. 6-16 mm) distributed throughout the phantom each day. Images were reconstructed with 2.036 mm and 4.073 mm pixels using ordered-subsets expectation-maximization (OSEM) both with and without point-spread function (PSF) modeling and TOF. Detection performance was assessed using the channelized non-prewhitened numerical observer with localization receiver operating characteristic (LROC) analysis. Tumor localization performance and the area under the LROC curve were then analyzed as functions of the pixel size. In all cases, the images with 2 mm pixels provided higher detection performance than those with 4 mm pixels. The degree of improvement from the smaller pixels was larger than that offered by PSF modeling for these data, and provided roughly half the benefit of using TOF. Key results were confirmed by two human observers, who read subsets of the test data. This study suggests that a significant improvement in tumor detection performance for PET can be attained by using smaller voxel sizes than commonly used at many centers. The primary drawback is a 4-fold increase in reconstruction time and data storage requirements.

SFPQ (a.k.a. PSF) is a human tumor suppressor protein that regulates many important functions in the cell nucleus including coordination of long non-coding RNA molecules into nuclear bodies. Here we describe the first crystal structures of Splicing Factor Proline and Glutamine Rich (SFPQ), revealing structural similarity to the related PSPC1/NONO heterodimer and a strikingly extended structure (over 265 Å long) formed by an unusual anti-parallel coiled-coil that results in an infinite linear polymer of SFPQ dimers within the crystals. Small-angle X-ray scattering and transmission electron microscopy experiments show that polymerization is reversible in solution and can be templated by DNA. We demonstrate that the ability to polymerize is essential for the cellular functions of SFPQ: disruptive mutation of the coiled-coil interaction motif results in SFPQ mislocalization, reduced formation of nuclear bodies, abrogated molecular interactions and deficient transcriptional regulation. The coiled-coil interaction motif thus provides a molecular explanation for the functional aggregation of SFPQ that directs its role in regulating many aspects of cellular nucleic acid metabolism. PMID:25765647

as the extraction and transport of anionic species and ion pairs including cesium halide and sulfate salts. It is divided into seven sections. The first section describes the synthetic methods employed to functionalize calix[4]pyrrole. The second section focuses on functionalized calix[4]pyrroles that display… enhanced anion binding properties compared to the non-functionalized parent system, octamethylcalix[4]pyrrole. The use of functionalized calix[4]pyrroles containing a fluorescent group or functionalized calix[4]pyrroles as building blocks for the preparation of stimulus-responsive materials is discussed… and the eventual development of therapeutics that function via the transport of anions across cell membranes, are discussed…

Entire Functions focuses on complex numbers and the algebraic operations on them and the basic principles of mathematical analysis. The book first elaborates on the concept of an entire function, including the natural generalization of the concept of a polynomial and power series. The text then takes a look at the maximum absolute value and the order of an entire function, as well as calculations for the coefficients of power series representing a given function, use of integrals, and complex numbers. The publication elaborates on the zeros of an entire function and the fundamen

Gaofen-4 is China's first geosynchronous orbit high-definition optical imaging satellite with extremely high temporal resolution. The features of staring imaging and high temporal resolution enable the super-resolution of multiple images of the same scene. In this paper, we propose a super-resolution (SR) technique to reconstruct a higher-resolution image from multiple low-resolution (LR) satellite images. The method first performs image registration in both the spatial and range domains. Then the point-spread function (PSF) of LR images is parameterized by a Gaussian function and estimated by a blind deconvolution algorithm based on the maximum a posteriori (MAP). Finally, the high-resolution (HR) image is reconstructed by a MAP-based SR algorithm. The MAP cost function includes a data fidelity term and a regularized term. The data fidelity term is in the L₂ norm, and the regularized term employs the Huber-Markov prior which can reduce the noise and artifacts while preserving the image edges. Experiments with real Gaofen-4 images show that the reconstructed images are sharper and contain more details than Google Earth ones.
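The Gaussian PSF parameterization used in such pipelines reduces the unknown blur to a single width parameter. A minimal sketch of the parameterized kernel (the blind MAP estimation of sigma itself is beyond a few lines and is not reproduced here):

```python
import numpy as np

def gaussian_psf(sigma, size):
    """Normalized, separable 2-D Gaussian PSF on a size x size grid (size odd)."""
    r = np.arange(size) - size // 2          # symmetric pixel offsets
    g = np.exp(-0.5 * (r / sigma) ** 2)
    psf = np.outer(g, g)                     # separable: G(x) * G(y)
    return psf / psf.sum()                   # unit total response

psf = gaussian_psf(sigma=1.2, size=9)
# Forward model for one LR frame: y = downsample(x convolved with psf) + noise
```

Because the kernel is fully determined by sigma, the blind-deconvolution step only has to search a one-dimensional parameter space rather than estimate every PSF pixel.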

We present a catalog covering 1.62 deg² of the COSMOS/UltraVISTA field with point-spread function (PSF) matched photometry in 30 photometric bands. The catalog covers the wavelength range 0.15-24 μm including the available GALEX, Subaru, Canada-France-Hawaii Telescope, VISTA, and Spitzer data. Catalog sources have been selected from the DR1 UltraVISTA Ks-band imaging that reaches a depth of Ks,tot = 23.4 AB (90% completeness). The PSF-matched catalog is generated using position-dependent PSFs ensuring accurate colors across the entire field. Also included is a catalog of photometric redshifts (z_phot) for all galaxies computed with the EAZY code. Comparison with spectroscopy from the zCOSMOS 10k bright sample shows that up to z ∼ 1.5 the z_phot are accurate to Δz/(1 + z) = 0.013, with a catastrophic outlier fraction of only 1.6%. The z_phot also show good agreement with the z_phot from the NEWFIRM Medium Band Survey out to z ∼ 3. A catalog of stellar masses and stellar population parameters for galaxies determined using the FAST spectral energy distribution fitting code is provided for all galaxies. Also included are rest-frame U – V and V – J colors, L_2800 and L_IR. The UVJ color-color diagram confirms that the galaxy bi-modality is well-established out to z ∼ 2. Star-forming galaxies also obey a star-forming 'main sequence' out to z ∼ 2.5, and this sequence evolves in a manner consistent with previous measurements. The COSMOS/UltraVISTA Ks-selected catalog covers a unique parameter space in depth, area, and multi-wavelength coverage and promises to be a useful tool for studying the growth of the galaxy population out to z ∼ 3-4.

X-ray mirrors with high focusing performances are in use in both mirror modules for X-ray telescopes and in synchrotron and FEL (Free Electron Laser) beamlines. A degradation of the focus sharpness arises in general from geometrical deformations and surface roughness, the former usually described by geometrical optics and the latter by physical optics. In general, technological developments are aimed at a very tight focusing, which requires the mirror profile to comply with the nominal shape as much as possible and to keep the roughness at a negligible level. However, a deliberate deformation of the mirror can be made to endow the focus with a desired size and distribution, via piezo actuators as done at the EIS-TIMEX beamline of FERMI@Elettra. The resulting profile can be characterized with a Long Trace Profilometer and correlated with the expected optical quality via a wavefront propagation code. However, if the roughness contribution can be neglected, the computation can be performed via a ray-tracing routine, and, under opportune assumptions, the focal spot profile (the point-spread function, PSF) can even be predicted analytically. The advantage of this approach is that the analytical relation can be reversed; i.e., from the desired PSF the required mirror profile can be computed easily, thereby avoiding the use of complex and time-consuming numerical codes. The method can also be suited in the case of spatially inhomogeneous beam intensities, as commonly experienced at synchrotrons and FELs. In this work we expose the analytical method and the application to the beam shaping problem.

We present a block-matching and Wiener filtering approach to atmospheric turbulence mitigation for long-range imaging of extended scenes. We evaluate the proposed method, along with some benchmark methods, using simulated and real-image sequences. The simulated data are generated with a simulation tool developed by one of the authors. These data provide objective truth and allow for quantitative error analysis. The proposed turbulence mitigation method takes a sequence of short-exposure frames of a static scene and outputs a single restored image. A block-matching registration algorithm is used to provide geometric correction for each of the individual input frames. The registered frames are then averaged, and the average image is processed with a Wiener filter to provide deconvolution. An important aspect of the proposed method lies in how we model the degradation point-spread function (PSF) for the purposes of Wiener filtering. We use a parametric model that takes into account the level of geometric correction achieved during image registration. This is unlike any method we are aware of in the literature. By matching the PSF to the level of registration in this way, the Wiener filter is able to fully exploit the reduced blurring achieved by registration. We also describe a method for estimating the atmospheric coherence diameter (or Fried parameter) from the estimated motion vectors. We provide a detailed performance analysis that illustrates how the key tuning parameters impact system performance. The proposed method is relatively simple computationally, yet it has excellent performance in comparison with state-of-the-art benchmark methods in our study.
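The Wiener deconvolution step can be sketched in 1-D (a hypothetical circular-convolution toy with a known Gaussian blur and a constant noise-to-signal parameter K; the paper's registration-dependent parametric PSF model is not reproduced here):

```python
import numpy as np

def wiener_deconvolve(blurred, psf, K=1e-3):
    """Frequency-domain Wiener deconvolution with a constant NSR term K."""
    H = np.fft.fft(psf)                      # blur transfer function
    G = np.fft.fft(blurred)                  # observed spectrum
    W = np.conj(H) / (np.abs(H) ** 2 + K)    # Wiener filter
    return np.real(np.fft.ifft(W * G))

n = 128
truth = np.zeros(n)
truth[[30, 64, 90]] = [1.0, 2.0, 1.5]        # sparse toy scene
x = np.arange(n)
dist = np.minimum(x, n - x)                  # circular distance from pixel 0
psf = np.exp(-0.5 * (dist / 2.0) ** 2)       # Gaussian blur, sigma = 2 pixels
psf /= psf.sum()
blurred = np.real(np.fft.ifft(np.fft.fft(truth) * np.fft.fft(psf)))
restored = wiener_deconvolve(blurred, psf, K=1e-6)
```

The regularizer K plays the role the PSF/noise model plays in the paper: frequencies where the blur transfer function is weak relative to K are attenuated instead of amplified, which is what keeps the inverse stable.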

We present a method of signal restoration to improve the signal-to-noise ratio, sharpen seismic arrival onset, and act as an empirical source deconvolution of specific seismic arrivals. Observed time-series g_i are modelled as a convolution of a simpler time-series f_i, and an invariant point-spread function (PSF) h that attempts to account for the earthquake source process. The method is used on the shear wave time window containing SKS and S, whereby using a Gaussian PSF produces more impulsive, narrower, signals in the wave train. The resulting restored time-series facilitates more accurate and objective relative traveltime estimation of the individual seismic arrivals. We demonstrate the accuracy of the reconstruction method on synthetic seismograms generated by the reflectivity method. Clean and sharp reconstructions are obtained with real data, even for signals with relatively high noise content. Reconstructed signals are simpler, more impulsive, and narrower, which allows highlighting of some details of arrivals that are not readily apparent in raw waveforms. In particular, phases nearly coincident in time can be separately identified after processing. This is demonstrated for two seismic wave pairs used to probe deep mantle and core-mantle boundary structure: (1) the Sab and Scd arrivals, which travel above and within, respectively, a 200-300-km-thick, higher than average shear wave velocity layer at the base of the mantle, observable in the 88-92 deg epicentral distance range and (2) SKS and SPdiff KS, which are core waves with the latter having short arcs of P-wave diffraction, and are nearly identical in timing near 108-110 deg in distance. A Java/Matlab algorithm was developed for the signal restoration, which can be downloaded from the authors' web page, along with example data and synthetic seismograms.

SPECT images suffer from poor spatial resolution, which leads to partial volume effects due to cross-talk between different anatomical regions. By utilising high-resolution structural images (CT or MRI) it is possible to compensate for these effects. Traditional partial volume correction (PVC) methods suffer from various limitations, such as correcting a single region only, returning only regional mean values, or assuming a stationary point-spread function (PSF). We recently presented a novel method in which PVC was combined with the reconstruction process in order to take into account the distance-dependent PSF in SPECT, which was based on filtered backprojection (FBP) reconstruction. We now present a new method based on the iterative OSEM algorithm, which has advantageous noise properties compared to FBP. We have applied this method to a series of 10 brain SPECT studies performed on healthy volunteers using the DATSCAN tracer. T1-weighted MRI images were co-registered to the SPECT data and segmented into 33 anatomical regions. The SPECT data were reconstructed using OSEM, and PVC was applied in the projection domain at each iteration. The correction factors were calculated by forward projection of a piece-wise constant image, generated from the segmented MRI. Images were also reconstructed using FBP and standard OSEM with and without resolution recovery (RR) for comparison. The images were evaluated in terms of striatal contrast and regional variability (CoV). The mean striatal contrast obtained with OSEM, OSEM-RR and OSEM-PVC relative to FBP was 1.04, 1.42 and 1.53, respectively, and the mean striatal CoV values were 1.05, 1.53 and 1.07. Both OSEM-RR and OSEM-PVC result in images with significantly higher contrast compared to FBP or OSEM, but OSEM-PVC avoids the increased regional variability of OSEM-RR due to improved structural definition.

Context. The Cherenkov Telescope Array (CTA) represents the most advanced facility designed for Cherenkov Astronomy. ASTRI SST-2M has been developed as a demonstrator for the Small Size Telescope in the context of the upcoming CTA. Its main innovation lies in the optical layout, which implements the Schwarzschild-Couder configuration and is fully validated for the first time. The ASTRI SST-2M optical system represents the first qualified example of a two-mirror telescope for Cherenkov Astronomy. This configuration permits us to (i) maintain high optical quality across a large field of view; (ii) demagnify the plate scale; and (iii) exploit new technological solutions for focal plane sensors. Aims: The goal of this paper is to present the optical qualification of the ASTRI SST-2M telescope. The qualification has been obtained by measuring the point-spread function (PSF) sizes generated in the focal plane at various distances from the optical axis. These values have been compared with the performance expected by design. Methods: After an introduction on Gamma-ray Astronomy from the ground, the optical design of ASTRI SST-2M and how it has been implemented is discussed. Moreover, the set-up used to qualify the telescope over the full field of view is described. Results: We report the results of the first-light optical qualification. The required specification of a flat PSF of 10 arcmin in a large field of view (∼10°) has been demonstrated. These results validate the design specifications, opening a new scenario for Cherenkov Gamma-ray Astronomy and, in particular, for the detection of high-energy (5-300 TeV) gamma rays and wide-field observations with CTA.

¹²³I-labelled radioligands are commonly used for single-photon emission computed tomography (SPECT) imaging of the dopaminergic system to study the dopamine transporter binding. The aim of this work was to compare the quantitative capabilities of two different SPECT systems through Monte Carlo (MC) simulation. The SimSET MC code was employed to generate simulated projections of a numerical phantom for two gamma cameras equipped with a parallel and a fan-beam collimator, respectively. A fully 3D iterative reconstruction algorithm was used to compensate for attenuation, the spatially variant point-spread function (PSF) and scatter. A post-reconstruction partial volume effect (PVE) compensation was also developed. For both systems, the correction for all degradations and PVE compensation resulted in recovery factors of the theoretical specific uptake ratio (SUR) close to 100%. For a SUR value of 4, the recovered SUR for the parallel imaging system was 33% for a reconstruction without corrections (OSEM), 45% for a reconstruction with attenuation correction (OSEM-A), 56% for a 3D reconstruction with attenuation and PSF corrections (OSEM-AP), 68% for OSEM-AP with scatter correction (OSEM-APS) and 97% for OSEM-APS plus PVE compensation (OSEM-APSV). For the fan-beam imaging system, the recovered SUR was 41% without corrections, 55% for OSEM-A, 65% for OSEM-AP, 75% for OSEM-APS and 102% for OSEM-APSV. Our findings indicate that the correction for degradations increases the quantification accuracy, with PVE compensation playing a major role in the SUR quantification. The proposed methodology allows us to reach similar SUR values for different SPECT systems, thereby allowing a reliable standardisation in multicentric studies.

Air-target detection by a thermal camera is a typical 'hot-spot detection' problem, and the energy available at the infrared sensor becomes a critical item to analyse. In order to evaluate the performance of an infrared system in search-and-track threat warning or passive surveillance, it is necessary to compute the Signal to Noise Ratio (SNR) of the system. Maximizing the SNR is an important goal to assure long detection ranges against stealth threats or cruise missiles with very low emissivities. A large number of detectors is just one of the requirements for this kind of application, and some energetic considerations lead to the consideration of particular geometrical array configurations. Usually, in the SNR evaluation it is assumed that all the energy from a target is focused by the optical system on a single detector element of the array. However, the image of a point source on the focal plane has a finite extent (spot), and its energy distribution is given by the Point-Spread Function (PSF) of the optics. The interaction of the finite spot size with the array gives rise to a spreading of the energy impinging on the individual detectors, which causes a decrease in performance. In this paper a statistical evaluation of the loss of energy impinging on the detector due to the finite image size of point targets was performed through a Monte Carlo simulation. By considering the maximum of the energy integrated by a single detector, it is possible to compute the effective SNR of the system. A new figure of merit, called the Spreading Factor (SF) and defined as the ratio between the maximum of the energy integrated by a single detector of the array and the total energy subtended by the PSF, permits evaluation of the capability of a detector array to detect point sources. Some typical detector and system configurations, with their technological impacts, have been examined.
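
A minimal Monte Carlo sketch of the Spreading Factor follows. A separable Gaussian PSF with width `sigma_px` (in units of the pixel pitch) is an assumption standing in for the real optics PSF, and the detector block size and trial counts are illustrative:

```python
import numpy as np
from math import erf, sqrt

def spreading_factor(sigma_px, n_trials=500, seed=0):
    """Monte Carlo estimate of the Spreading Factor (SF): the mean fraction
    of the spot energy collected by the single detector that integrates the
    most energy, for spot centers uniformly distributed within a pixel.
    """
    rng = np.random.default_rng(seed)
    edges = np.arange(-3.0, 4.0)   # pixel edges of a 6x6 detector block

    def cdf(u):                    # 1-D Gaussian cumulative distribution
        return 0.5 * (1.0 + erf(u / (sigma_px * sqrt(2.0))))

    sf = []
    for _ in range(n_trials):
        dx, dy = rng.uniform(-0.5, 0.5, size=2)      # sub-pixel spot center
        ex = np.array([cdf(e - dx) for e in edges])
        ey = np.array([cdf(e - dy) for e in edges])
        energy = np.outer(np.diff(ey), np.diff(ex))  # energy per detector
        sf.append(energy.max())                      # best single detector
    return float(np.mean(sf))

sf_small_spot = spreading_factor(0.1)  # spot well inside one pixel
sf_large_spot = spreading_factor(1.0)  # spot spread over several pixels
```

As the abstract argues, a spot much smaller than the pixel pitch is almost always caught by one detector (SF near 1), while a spot comparable to the pitch spreads its energy and degrades the effective SNR.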

In Electron Beam Lithography (EBL), the modeling of the Proximity Effects (PE) is the key to successfully print patterns of different size and density at the desired dimension. Although current PE models are increasingly efficient for nominal process conditions, they do not allow covering a broad exposure dose range, which would be interesting for extending the process window, for instance. This paper shows how to improve the accuracy of the dimension estimations of overexposed patterns in EBL by adding a new term to the existing compact model. This advanced compact model was inspired by the chemical mechanisms that activate the acid generator embedded in the resist during the EBL exposure. Most of the existing compact models use the electronic Aerial Image (E_AEI), calculated as the convolution product of the pattern geometry with a Point-Spread Function (PSF), and extract pattern contours using a threshold value to model the non-linear resist behavior [1]. Here the pattern contours are simulated using an Acid Aerial Image (A_AEI) calculated from the initial E_AEI complemented by the Dill transformation [1]. A strong impact is expected at high exposure doses, but no changes should occur on patterns exposed close to their nominal dose. The modeling and calibration capabilities of the Inscale® software were used to validate the new model with experimental measurements. Calibration and simulations obtained with both the standard model and the advanced model were compared on a test design. First, it shows that after calibration the PSFs of the two models are similar, meaning that the physics is consistent for both models. The new advanced model maintains the accuracy at nominal dose but increases the overall accuracy by 62% for a process window with dose latitude extended up to 20%.
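
The threshold compact model (aerial image = layout ⊛ PSF, contour at a threshold) can be illustrated in 1-D. This is a generic sketch with made-up numbers; the Dill-based acid image of the advanced model is not reproduced here:

```python
import numpy as np

def printed_width(dose_map, psf, threshold):
    """Threshold compact model in 1-D: the aerial image is the layout
    (scaled by dose) convolved with the PSF; the printed feature is where
    the image exceeds the resist threshold. Returns the width in samples.
    """
    aei = np.real(np.fft.ifft(np.fft.fft(dose_map) * np.fft.fft(psf)))
    return int(np.count_nonzero(aei > threshold))

# Illustrative numbers only: a ~40-sample line, a Gaussian PSF, 0.5 threshold.
n = 400
x = np.arange(n)
mask = ((x > 180) & (x < 220)).astype(float)
psf = np.exp(-(np.minimum(x, n - x) ** 2.0) / (2 * 8.0 ** 2))
psf /= psf.sum()                    # wrapped Gaussian centered at sample 0
w_nominal = printed_width(1.0 * mask, psf, threshold=0.5)
w_over = printed_width(1.5 * mask, psf, threshold=0.5)   # 50% overexposure
```

With a fixed threshold, scaling the dose widens the printed feature, which is exactly the overexposure regime the advanced model is designed to capture more accurately.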

We investigated the dependence of image quality on the temperature of a position sensitive avalanche photodiode (PSAPD)-based small animal single photon emission computed tomography (SPECT) gamma camera with a CsI:Tl scintillator. Currently, nitrogen gas cooling is preferred to operate PSAPDs in order to minimize the dark current shot noise. Being able to operate a PSAPD at a relatively high temperature (e.g., 5 °C) would allow a more compact and simple cooling system for the PSAPD. In our investigation, the temperature of the PSAPD was controlled by varying the flow of cold nitrogen gas through the PSAPD module and ranged from -40 °C to 20 °C. Three experiments were performed to demonstrate the performance variation over this temperature range. The point-spread function (PSF) of the gamma camera was measured at various temperatures, showing variation of the full-width-half-maximum (FWHM) of the PSF. In addition, a 99mTc-pertechnetate (140 keV) flood source was imaged and the visibility of the scintillator segmentation (16×16 array, 8 mm × 8 mm area, 400 μm pixel size) at different temperatures was evaluated. Comparison of image quality was made at -25 °C and 5 °C using a mouse heart phantom filled with an aqueous solution of 99mTc-pertechnetate and imaged using a 0.5 mm pinhole collimator made of tungsten. The reconstructed image quality of the mouse heart phantom at 5 °C degraded in comparison to the reconstructed image quality at -25 °C. However, the defect and structure of the mouse heart phantom were clearly observed, showing the feasibility of operating PSAPDs for SPECT imaging at 5 °C, a temperature that would not require nitrogen cooling. All PSAPD evaluations were conducted with an applied bias voltage that allowed the highest gain at a given temperature.

Galaxies with stellar masses below ~10^7 M⊙ and specific star formation rates (sSFRs) above ~10^-7.4 yr^-1 were examined on images of the Hubble Space Telescope Frontier Field Parallels for Abell 2744 and MACS J0416.1-02403. They appear as unresolved “Little Blue Dots” (LBDs). They are less massive and have higher sSFRs than the “blueberries” studied by Yang et al. and higher sSFRs than the “Blue Nuggets” studied by Tacchella et al. We divided the LBDs into three redshift bins and, for each, stacked the B435, V606, and I814 images convolved to the same stellar point-spread function (PSF). Their radii were determined from PSF deconvolution to be ∼80 to ∼180 pc. The high sSFRs suggest that their entire stellar mass has formed in only 1% of the local age of the universe. The sSFRs at similar epochs in local dwarf galaxies are lower by a factor of ∼100. Assuming that the star formation rate is ε_ff M_gas/t_ff for efficiency ε_ff, gas mass M_gas, and free-fall time t_ff, the gas mass and gas-to-star mass ratio are determined. This ratio exceeds 1 for reasonable efficiencies, and is likely to be ∼5 even with a high ε_ff of 0.1. We consider whether these regions are forming today’s globular clusters. With their observed stellar masses, the maximum likely cluster mass is ∼10^5 M⊙, but if star formation continues at the current rate for ∼10 t_ff ∼ 50 Myr before feedback and gas exhaustion stop it, then the maximum cluster mass could become ∼10^6 M⊙.
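
The gas-to-star ratio implied by SFR = ε_ff M_gas/t_ff can be worked through directly: dividing both sides by the stellar mass gives M_gas/M_star = sSFR · t_ff/ε_ff. The input values below are illustrative assumptions chosen to match the scales quoted in the text (10 t_ff ~ 50 Myr), not the paper's fitted numbers:

```python
# Worked example of the gas-mass estimate implied by SFR = eps_ff * M_gas / t_ff.
def gas_to_star_ratio(ssfr_per_yr, t_ff_yr, eps_ff):
    """M_gas / M_star = sSFR * t_ff / eps_ff, since sSFR = SFR / M_star."""
    return ssfr_per_yr * t_ff_yr / eps_ff

ssfr = 1e-7      # yr^-1: the whole stellar mass forms in ~10 Myr (assumed)
t_ff = 5.0e6     # yr, so that 10 * t_ff ~ 50 Myr as in the text
ratio_low_eff = gas_to_star_ratio(ssfr, t_ff, 0.01)   # ~50
ratio_high_eff = gas_to_star_ratio(ssfr, t_ff, 0.1)   # ~5
```

Even the optimistic efficiency of 0.1 leaves a gas reservoir several times the stellar mass, which is the basis of the abstract's "ratio ∼5" statement.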

Direct determination of the source intensity distribution of clinical linear accelerators is still a challenging problem for small field beam modeling. Current techniques most often involve special equipment and are difficult to implement in the clinic. In this work we present a maximum-likelihood expectation-maximization (MLEM) approach to the source reconstruction problem utilizing small fields and a simple experimental set-up. The MLEM algorithm iteratively ray-traces photons from the source plane to the exit plane and extracts corrections based on photon fluence profile measurements. The photon fluence profiles were determined by dose profile film measurements in air using a high-density thin foil as build-up material and an appropriate point-spread function (PSF). The effect of other beam parameters and scatter sources was minimized by using the smallest field size (0.5 × 0.5 cm²). The source occlusion effect was reproduced by estimating the position of the collimating jaws during this process. The method was first benchmarked against simulations for a range of typical accelerator source sizes. The sources were reconstructed with an accuracy better than 0.12 mm in the full width at half maximum (FWHM) relative to the respective electron sources incident on the target. The estimated jaw positions agreed within 0.2 mm with the expected values. The reconstruction technique was also tested against measurements on a Varian Novalis Tx linear accelerator and compared to a previously commissioned Monte Carlo model. The reconstructed FWHM of the source agreed within 0.03 mm and 0.11 mm with the commissioned electron source in the crossplane and inplane orientations, respectively. The impact of the jaw positioning, experimental and PSF uncertainties on the reconstructed source distribution was evaluated, with the former having the dominant effect.
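
The MLEM iteration at the heart of the method can be sketched generically. This is the standard MLEM update for a linear forward model, not the paper's ray-tracing implementation; the matrix `A` stands in for the source-plane-to-exit-plane projector, and the 1-D Gaussian example is illustrative:

```python
import numpy as np

def mlem(A, measured, n_iter=200):
    """Standard MLEM update for a linear model measured ~ A @ source."""
    source = np.ones(A.shape[1])               # flat initial estimate
    sens = A.T @ np.ones(A.shape[0])           # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = A @ source                      # forward project
        ratio = np.where(proj > 0, measured / proj, 0.0)
        source *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return source

# Toy 1-D example: a Gaussian "focal spot" blurred by a known Gaussian
# system matrix; MLEM recovers a source sharper than the measured fluence.
n = 50
idx = np.arange(n)
true_source = np.exp(-((idx - 25) ** 2) / (2 * 2.0 ** 2))
A = np.exp(-((idx[:, None] - idx[None, :]) ** 2) / (2 * 3.0 ** 2))
A /= A.sum(axis=0, keepdims=True)              # normalize each column
fluence = A @ true_source                      # simulated measurement
recon = mlem(A, fluence)
```

The multiplicative update keeps the estimate non-negative, which is why MLEM is a natural fit for intensity distributions like a focal spot.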

Earth's atmosphere can significantly impact the propagation of electromagnetic radiation, degrading the performance of imaging systems. Deleterious effects of the atmosphere include turbulence, absorption and scattering by particulates. Turbulence leads to blurring, while absorption attenuates the energy that reaches imaging sensors. The optical properties of aerosols and clouds also impact radiation propagation via scattering, resulting in decorrelation from unscattered light. Models have been proposed for calculating a point-spread function (PSF) for aerosol scattering, providing a method for simulating the contrast and spatial detail expected when imaging through atmospheres with significant aerosol optical depth. However, these synthetic images and their predicating theory would benefit from comparison with measurements in a controlled environment. Recently, Michigan Technological University (MTU) has designed a novel laboratory cloud chamber. This multiphase, turbulent "Pi Chamber" is capable of pressures down to 100 hPa and temperatures from -55 to +55°C. Additionally, humidity and aerosol concentrations are controllable. These boundary conditions can be combined to form and sustain clouds in an instrumented laboratory setting for measuring the impact of clouds on radiation propagation. This paper describes an experiment to generate mixing and expansion clouds in supersaturated conditions with salt aerosols, and an example of measured imagery viewed through the generated cloud is shown. Aerosol and cloud droplet distributions measured during the experiment are used to predict scattering PSF and MTF curves, and a methodology for validating existing theory is detailed. Measured atmospheric inputs will be used to simulate aerosol-induced image degradation for comparison with measured imagery taken through actual cloud conditions. The aerosol MTF will be experimentally calculated and compared to theoretical expressions. The key result of this study is the

From about December 2000 to January 2001 the Ion and Neutral Camera (INCA) imaged Jupiter in Energetic Neutral Atoms (ENA) from a distance of about 137-250 Jovian planetary radii (R_J) over an energy range from about 10 to 300 keV. A forward model is employed to derive column densities and assumes a neutral gas-plasma model and an energetic ion distribution based on Galileo in-situ measurements. We demonstrate that Jupiter observations by INCA are consistent with a column density peaking around Europa's orbit in the range from 2×10¹² cm⁻² to 7×10¹² cm⁻², assuming H2, and are consistent with the upper limits reported from the Cassini/UVIS observations. Most of the INCA observations are consistent with a roughly azimuthally symmetric gas distribution, but some appear consistent with an asymmetric gas distribution centred on Europa, which would directly imply that Europa is the source of the gas. Although our neutral gas model assumes a Europa source, we explore other explanations of the INCA observations including: (1) ENAs are produced by charge exchange between energetic ions and neutral hydrogen originating from charge-exchanged protons in the Io plasma torus. However, estimated densities by Cheng (1986) are about one order of magnitude too low to explain the INCA observations; (2) ENAs are produced by charge exchange between energetic ions and plasma ions such as O+ and S+ originating from Io. However, that would require O+ plasma densities higher than expected to compensate for the low charge-exchange cross section between protons and O+; (3) We re-examine the INCA Point-Spread Function (PSF) to determine if the ENA emissions in the vicinity of Europa's orbit could be explained by internal scattering of ENAs originating from Jupiter's high-latitude upper atmosphere. However, the PSF was well constrained by using Jupiter from distances where it could be considered a point source.

Ever smaller Near Earth Objects (NEOs) continue to be discovered, with most potentially hazardous ones already surveyed and ongoing plans for space missions to deflect and mine them in the near future. These transitional objects in relatively unstable orbits have recently experienced collisional or dynamical encounters that have sent them to Earth’s vicinity. Finding comet-like activity (sublimation and ejected dust) is necessary to understand their origin, recent history, and evolution. Mommert et al. (2014) recently discovered cometary activity on the third-largest NEO, (3552) Don Quixote, using near-infrared imaging from Spitzer/IRAC; they detect both a coma and a tail as extended emission, which they identify as CO2 ice sublimation. This activity has gone unnoticed owing to either sporadic activity or the relatively low surface brightness in optical wavelengths of light reflecting off dust, 26 mag/arcsec², which imposes an extreme bias against detection. We propose to find this activity directly in the optical by going above the atmosphere. We are developing a 6U Cubesat to carry a 20 cm aperture telescope. The volume restrictions impose a deployment-system design for the telescope. We will study the optimal mission and optical setup for our goals, including the feasibility of a novel coronagraph to increase the sensitivity. Detecting NEO activity requires stability and low instrumental noise over many hours. The atmosphere’s varying point-spread function (PSF), coupled with the extended PSF of reflective telescopes, leads us to propose developing the concept and technology for a refractive telescope in space, with the potential inclusion of a coronagraph, optimized for detecting faint features near bright targets. The experiment considers targeting nearby NEOs and optimizing observations for low surface brightness.

The point-spread function (PSF) is used in optics for design and assessment of the imaging capabilities of an optical system. It is therefore of vital importance that this PSF can be calculated fast and accurately. In the past 12 years, the Extended Nijboer-Zernike (ENZ) approach has been developed for the purpose of semi-analytic evaluation of the PSF, for circularly symmetric optical systems, in the focal region. In the earliest ENZ years, the Debye approximation of the diffraction integral, by which the PSF is given, was considered for the very basic situation of a low-NA optical system and relatively small defocus values, so that a scalar treatment was allowed with a focal factor comprising a quadratic function in the exponential. At present, the ENZ method allows calculation of the PSF in low- and high-NA cases, in scalar form and for vector fields (including polarization), for large wave-front aberrations, including amplitude non-uniformities, using a quasi-spherical phase focal factor in a virtually unlimited focal range around the focal plane, and no limitations in the off-axis direction. Additionally, the application range of the method has been broadened and generalized to the calculation of aerial images of extended objects by including the finite distance of the object to the entrance pupil. Also imaging into a multi-layer is now possible by accounting for both forward and backward propagation in the layers. In the advanced ENZ approach, the generalized, complex-valued pupil function is developed into a series of Zernike circle polynomials, with exponential azimuthal dependence (having cosine/sine azimuthal dependence as special cases). For each Zernike term, the diffraction integral reduces after azimuthal integration to an integral that can be expressed as an infinite double series involving spherical Bessel functions, accounting for the parameters of the optical system and the defocus value, and Jinc functions comprising the radial off-axis value

The present study aims at evaluating and comparing electrical and magnetic distributed source imaging methods applied to high-density Electroencephalography (hdEEG) and Magnetoencephalography (MEG) data. We used resolution matrices to characterize the spatial resolution properties of the Minimum Norm Estimate (MNE), dynamic Statistical Parametric Mapping (dSPM), standardized Low-Resolution Electromagnetic Tomography (sLORETA) and coherent Maximum Entropy on the Mean (cMEM, an entropy-based technique). The resolution matrix provides information on the Point-Spread Functions (PSFs) and the Crosstalk functions (CTs), the latter also called source leakage, as it reflects the influence of a source on its neighbors. The spatial resolution of the inverse operators was first evaluated theoretically and then with real data acquired using electrical median nerve stimulation on five healthy participants. We evaluated the Dipole Localization Error (DLE) and the Spatial Dispersion (SD) of each PSF and CT map. cMEM showed the smallest spatial spread (SD) for both PSF and CT maps, whereas localization errors (DLE) were similar for all methods. Whereas cMEM SD values were lower in MEG compared to hdEEG, the other methods slightly favored hdEEG over MEG. In real data, cMEM provided similar localization error and significantly less spatial spread than other methods for both MEG and hdEEG. Whereas both MEG and hdEEG provided very accurate localizations, all the source imaging methods actually performed better in MEG compared to hdEEG according to all evaluation metrics, probably due to the higher signal-to-noise ratio of the data in MEG. Our overall results show that all investigated methods provide similar localization errors, suggesting very accurate localization for both MEG and hdEEG when a similar number of sensors is considered for both modalities. Intrinsic properties of the source imaging methods, as well as their behavior for well-controlled tasks, suggest an overall better
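
The two evaluation metrics named above can be sketched for a single PSF map. The exact definitions vary between papers, so the forms below (DLE as the distance to the PSF maximum, SD as a PSF-energy-weighted RMS distance) are assumptions for illustration:

```python
import numpy as np

def dle_and_sd(psf_map, positions, true_idx):
    """Sketch of two resolution-matrix metrics for one PSF map.

    DLE: distance from the true source to the maximum of its PSF map.
    SD : PSF-energy-weighted RMS distance of activity from the true source.
    (Assumed metric forms; conventions differ across the literature.)
    """
    w2 = np.abs(psf_map) ** 2
    d = np.linalg.norm(positions - positions[true_idx], axis=1)
    dle = d[np.argmax(w2)]
    sd = np.sqrt(np.sum(w2 * d ** 2) / np.sum(w2))
    return dle, sd

# Toy example: 100 sources on a line, 1 mm apart; the PSF of source 50
# peaks two sources away, so the DLE is 2 mm and the SD a few mm.
positions = np.column_stack([np.arange(100.0), np.zeros(100), np.zeros(100)])
psf_map = np.exp(-((np.arange(100) - 52.0) ** 2) / (2 * 3.0 ** 2))
dle, sd = dle_and_sd(psf_map, positions, true_idx=50)
```

In this framing, a method can have a small DLE (well-placed peak) yet a large SD (widely spread activity), which is exactly the distinction the study draws between the compared methods.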

We present a general method for detecting and correcting biases in the outputs of particle-tracking experiments. Our approach is based on the histogram of estimated positions within pixels, which we term the single-pixel interior filling function (SPIFF). We use the deviation of the SPIFF from a uniform distribution to test the veracity of tracking analyses from different algorithms. Unbiased SPIFFs correspond to uniform pixel filling, whereas biased ones exhibit pixel locking, in which the estimated particle positions concentrate toward the centers of pixels. Although pixel locking is a well-known phenomenon, we go beyond existing methods to show how the SPIFF can be used to correct errors. The key is that the SPIFF aggregates statistical information from many single-particle images and localizations that are gathered over time or across an ensemble, and this information augments the single-particle data. We explicitly consider two cases that give rise to significant errors in estimated particle locations: undersampling the point-spread function due to small emitter size and intensity overlap of proximal objects. In these situations, we show how errors in positions can be corrected essentially completely with little added computational cost. Additional situations and applications to experimental data are explored in the SI Appendix. In the presence of experimental-like shot noise, the precision of the SPIFF-based correction achieves (and can even exceed) the unbiased Cramér-Rao lower bound. We expect the SPIFF approach to be useful in a wide range of localization applications, including single-molecule imaging and particle tracking, in fields ranging from biology to materials science to astronomy.
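
Computing the SPIFF itself is simple: it is the histogram of the sub-pixel (fractional) parts of the estimated positions. The sketch below contrasts an unbiased estimator with a synthetic pixel-locked one; the locking model (estimates pulled toward pixel centers by a fixed factor) is an illustrative assumption:

```python
import numpy as np

def spiff(positions, n_bins=20):
    """Single-pixel interior filling function: the histogram of the
    sub-pixel parts of the estimated positions, normalized to sum to 1.
    A flat SPIFF indicates unbiased localization; mass piling up around
    one value indicates pixel locking.
    """
    frac = np.asarray(positions) % 1.0          # position within its pixel
    hist, _ = np.histogram(frac, bins=n_bins, range=(0.0, 1.0))
    return hist / hist.sum()

rng = np.random.default_rng(1)
true_pos = rng.uniform(0.0, 1000.0, size=20000)

# Unbiased estimator: sub-pixel parts stay uniform.
s_flat = spiff(true_pos)

# Pixel-locked estimator: estimates pulled toward pixel centers (taken
# here at half-integer coordinates), compressing sub-pixel parts toward 0.5.
frac = true_pos % 1.0
locked = np.floor(true_pos) + 0.5 + 0.3 * (frac - 0.5)
s_locked = spiff(locked)
```

The correction described in the abstract inverts exactly this kind of compression: since the true sub-pixel distribution should be uniform, the measured SPIFF defines a remapping of estimated positions back to uniform filling.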

We present an experiment, well adapted for students of introductory optics courses, for the visualization of the impact of spherical aberration in the point-spread function of imaging systems. The demonstrations are based on the analogy between the point-spread function of spherically aberrated systems, and the defocused patterns of 1D slit-like…

Functional Analysis examines trends in functional analysis as a mathematical discipline and the ever-increasing role played by its techniques in applications. The theory of topological vector spaces is emphasized, along with the applications of functional analysis to applied analysis. Some topics of functional analysis connected with applications to mathematical economics and control theory are also discussed. Comprised of 18 chapters, this book begins with an introduction to the elements of the theory of topological spaces, the theory of metric spaces, and the theory of abstract measure space

Amphipols are amphipathic polymers that stabilize membrane proteins isolated from their native membrane. They have been functionalized with various chemical groups in past years for protein labeling and protein immobilization. This large toolbox of functionalized amphipols, combined with their interesting physico-chemical properties, gives opportunities to selectively add multiple functionalities to membrane proteins and to tune them according to need. This unique combination of properties makes them one of the most versatile strategies available today for exploiting membrane proteins on surfaces for various applications in synthetic biology. This review summarizes the properties of functionalized amphipols suitable for synthetic-biology approaches.

The best treatment for cataract patients, which allows restoration of clear vision, is implantation of an artificial intraocular lens (IOL). The image quality of the lens has a significant impact on the quality of the patient's vision. After long exposure of the implant to an aqueous environment, defects appear in the artificial lenses. The defects generated in the IOL have different refractive indices. For example, the glistening phenomenon is based on light scattering from oval microvacuoles filled with aqueous humor, whose refractive index is about 1.34. Calcium deposits are another example of lens defects and can be characterized by a refractive index of 1.63. In the presented studies it was calculated how the difference between the refractive index of the defect and that of the lens material affects image quality. The OpticStudio Professional program (from Radiant Zemax, LLC) was used to construct a numerical model of the eye with an IOL and to calculate the characteristics of the retinal image. Retinal image quality was described by characteristics such as the Point-Spread Function (PSF) and the Optical Transfer Function with amplitude and phase. The results show a strong correlation between the refractive-index difference and retinal image quality.

CNES (French spatial agency) will provide the AltiKa high resolution altimeter, the Doris instrument and the LRA (Laser Retroreflector Array) for SARAL (Satellite with Argos and AltiKa) in cooperation with ISRO (Indian space agency). The LRA is a passive equipment reflecting the laser beams coming from the Earth ground stations. Computing the round-trip travel time of the laser beams allows determination of the satellite altitude to within a few millimeters. The reflective function is performed by a set of 9 corner cube reflectors, with a conical arrangement providing a 150 degrees wide field of view over the full 360 degrees azimuth angle. According to CNES optomechanical specifications, the LRA has been developed by SESO (French optical firm). SESO has succeeded in providing the corner cube reflectors with a very stringent dihedral angle error of 1.6 arcsec and an accuracy within +/-0.5 arcsec. During this development, SESO has performed mechanical, thermal and thermo-optical analyses. The optical gradient of each corner cube, as well as angular deviations and the PSF (point-spread function) in each laser range-finding direction, have been computed. Mechanical and thermal tests have been successfully performed. A thermo-optical test has successfully confirmed the optical effect of the predicted in-flight thermal gradients. Each reflector is characterized in order to find its best location in the LRA housing and to maximize performance for the space telemetry mission.

Results are presented from the latest experiment with a new neutron/gamma detector, the Time-Resolved, Event-Counting Optical Radiation (TRECOR) detector. It is composed of a scintillating fiber-screen converter, bending mirror, lens and Event-Counting Image Intensifier (ECII), capable of specifying the position and time-of-flight of each event. TRECOR is designated for a multipurpose integrated system that will detect Special Nuclear Materials (SNM) and explosives in cargo. Explosives are detected by Fast-Neutron Resonance Radiography, and SNM by Dual Discrete-Energy gamma-Radiography. Neutrons and gamma-rays are both produced in the ¹¹B(d,n+γ)¹²C reaction. The two detection modes can be implemented simultaneously in TRECOR, using two adjacent radiation converters that share a common optical readout. In the present experiment the neutron detection mode was studied, using a plastic scintillator converter. The measurements were performed at the PTB cyclotron, using the ⁹Be(d,n) neutron spectrum obtained from a thick Be target at Ed ≈ 13 MeV. The basic characteristics of this detector were investigated, including the Contrast Transfer Function (CTF), point-spread function (PSF) and elemental discrimination capability.

We report the preparation of hollow spherical polypyrrole balls (HSPBs) by two different approaches. In the first approach, core-shell conductive balls (CSCBs) were prepared with poly(styrene) as core and polypyrrole (PPy) as shell by in situ polymerization of pyrrole in the presence of polystyrene (PS) latex particles. In the other approach, CSCBs were obtained by in situ copolymerization of pyrrole in the presence of functionalized PS (PS(F)) bearing hydrophilic groups such as anhydride, boronic acid, carboxylic acid, or sulfonic acid; HSPBs were then obtained by removal of the PS or PS(F) core from the CSCBs. TEM images reveal the spherical morphology of HSPBs prepared from PS(F). The conductivity of the CSCBs and HSPBs was in the range of 0.20–0.90 S/cm.

The class II region of the human major histocompatibility complex (MHC) may encode several genes controlling the processing of endogenous antigen and the presentation of peptide epitopes by MHC class I molecules to cytotoxic T lymphocytes. A previously described peptide supply factor (PSF1) is a member of the multidrug-resistance family of transporters and may pump cytosolic peptides into the membrane-bound compartment where class I molecules assemble. A second transporter gene, PSF2, was identified 10 kilobases (kb) from PSF1, near the class II DOB gene. The complete sequences of PSF1 and PSF2 were determined from cDNA clones. The translation products are closely related in sequence and predicted secondary structure. Both contain a highly conserved ATP-binding fold and share 25% homology in a hydrophobic domain with tentatively eight membrane-spanning segments. Based on the principle of dimeric organization of these two domains in other transporters, PSF1 and PSF2 may function as complementary subunits, independently as homodimers, or both. Taken together with previous genetic evidence, the coregulation of PSF1 and PSF2 by γ-interferon and the partly coordinate transcription of these genes suggest a common role in peptide loading of class I molecules, although a distinct function of PSF2 cannot be ruled out.

Because chemicals can adversely affect cognitive function in humans, considerable effort has been made to characterize their effects using animal models. Information from such models will be necessary to: evaluate whether chemicals identified as potentially neurotoxic by screenin...

A string-formatting function such as printf in C seemingly requires dependent types, because its control string determines the rest of its arguments. We show how changing the representation of the control string makes it possible to program printf in ML (which does not allow dependent types). The result is well typed and perceptibly more efficient than the corresponding library functions in Standard ML of New Jersey and in Caml.
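The continuation-based trick behind this result can be sketched outside ML as well. The following illustration (in Python, purely to convey the idea, not the paper's ML code) represents the control string as composed directive functions rather than a runtime-parsed string, so each directive fixes the type and number of the remaining arguments.

```python
# Sketch of "functional unparsing": the format is a composition of
# directives, each of which either emits literal text or consumes one
# argument, threaded through a continuation.
def lit(s):
    # literal text: consumes no argument
    return lambda k: lambda acc: k(acc + s)

def num():
    # integer directive (like %d): consumes exactly one argument
    return lambda k: lambda acc: lambda x: k(acc + str(x))

def compose(d1, d2):
    # sequence two directives
    return lambda k: d1(d2(k))

def sprintf(directive):
    # run the directive chain with the identity continuation
    return directive(lambda acc: acc)("")

fmt = compose(lit("x = "), compose(num(), lit("!")))
# sprintf(fmt) is now a function expecting exactly one int argument
print(sprintf(fmt)(42))  # -> x = 42!
```

In ML the same construction is statically typed: composing directives composes their argument types, which is how printf becomes typeable without dependent types.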

Absolute quantitation of the cerebral metabolic rate for glucose (CMRglc) can be obtained in positron emission tomography (PET) studies when serial measurements of the arterial [18F]-fluorodeoxyglucose (FDG) input are available. Since this is not always practical in PET studies of rodents, there has been considerable interest in defining an image-derived input function (IDIF) by placing a volume of interest (VOI) within the left ventricle of the heart. However, spill-in arising from trapping of FDG in the myocardium often leads to progressive contamination of the IDIF, which propagates to underestimation of the magnitude of CMRglc. We therefore developed a novel, non-invasive method for correcting the IDIF without scaling to a blood sample. To this end, we first obtained serial arterial samples and dynamic FDG-PET data of the head and heart in a group of eight anaesthetized rats. We fitted a bi-exponential function to the serial measurements of the IDIF, and then used the linear graphical Gjedde-Patlak method to describe the accumulation in myocardium. We next estimated the magnitude of myocardial spill-in reaching the left-ventricle VOI by assuming a Gaussian point-spread function, and corrected the measured IDIF for this estimated spill-in. Finally, we calculated parametric maps of CMRglc using the corrected IDIF and, for the sake of comparison, relative to serial blood sampling from the femoral artery. The uncorrected IDIF resulted in 20% underestimation of the magnitude of CMRglc relative to the gold-standard arterial input method. However, there was no bias with the corrected IDIF, which was robust to the variable extent of myocardial tracer uptake, such that there was a very high correlation between individual CMRglc measurements using the corrected IDIF and gold-standard arterial input results. Based on simulation, we furthermore find that electrocardiogram (ECG) gating is not necessary for IDIF quantitation using our approach.
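The correction pipeline described here (bi-exponential fit of the IDIF, a trapping curve for the myocardium, and subtraction of a PSF-derived spill-in fraction) can be sketched as follows. All numerical values are hypothetical placeholders for illustration, not the paper's fitted parameters.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, k1, a2, k2):
    # bi-exponential model of the image-derived input function (IDIF)
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

# synthetic "measured" IDIF samples (arbitrary units), illustration only
t = np.linspace(0.1, 60, 40)
idif_true = biexp(t, 80.0, 0.9, 10.0, 0.02)
rng = np.random.default_rng(0)
measured = idif_true + rng.normal(0, 0.5, t.size)

popt, _ = curve_fit(biexp, t, measured, p0=(50, 1.0, 5, 0.01), maxfev=10000)
idif = biexp(t, *popt)          # smooth fitted IDIF

# assume myocardial activity grows roughly linearly (Patlak-like trapping)
# and a fixed spill-in fraction set by the Gaussian PSF overlap with the VOI
spill_fraction = 0.15           # hypothetical, would come from the PSF model
myocardium = 0.5 * t            # hypothetical trapped-tracer curve
idif_corrected = idif - spill_fraction * myocardium
```

The real method estimates the spill fraction from the assumed Gaussian PSF and the measured myocardial time-activity curve; here both are stand-ins.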

We describe a new method of extracting the spectra of stars from observations of crowded stellar fields with integral field spectroscopy (IFS). Our approach extends the well-established concept of crowded-field photometry in images into the domain of 3-dimensional spectroscopic datacubes. The main features of our algorithm are as follows. (1) We assume that a high-fidelity input source catalogue already exists, e.g. from HST data, so that sophisticated source detection in the IFS data is not needed. (2) Source positions and properties of the point-spread function (PSF) vary smoothly between spectral layers of the datacube, and these variations can be described by simple fitting functions. (3) The shape of the PSF can be adequately described by an analytical function. Even without isolated PSF calibrator stars we can therefore estimate the PSF by a model fit to the full ensemble of stars visible within the field of view. (4) By using sparse matrices to describe the sources, the problem of extracting the spectra of many stars simultaneously becomes computationally tractable. We present extensive performance and validation tests of our algorithm using realistic simulated datacubes that closely reproduce actual IFS observations of the central regions of Galactic globular clusters. We investigate the quality of the extracted spectra under the effects of crowding with respect to the resulting signal-to-noise ratios (S/N) and any possible changes in the continuum level, as well as with respect to absorption-line spectral parameters, radial velocities, and equivalent widths. The main effect of blending between two nearby stars is a decrease in the S/N of their spectra. The effect increases with the crowding in the field such that the maximum number of stars with useful spectra is always ~0.2 per spatial resolution element. This balance breaks down when exceeding a total source density of one significantly detected star per resolution element. We also explore the
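Point (4), extracting many stellar spectra at once via sparse matrices, amounts to a sparse linear least-squares problem per spectral layer: each column of the design matrix is one star's PSF footprint, and the unknowns are the stellar fluxes. A minimal 1D sketch with a hypothetical Gaussian PSF and invented star positions:

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import lsqr

# toy 1D "spectral layer": two blended stars with known positions and an
# analytic Gaussian PSF; we recover their fluxes by sparse least squares
x = np.arange(100)

def psf(center, sigma=2.0):
    p = np.exp(-0.5 * ((x - center) / sigma) ** 2)
    return p / p.sum()          # unit-normalized PSF profile

positions = [40.0, 46.0]        # hypothetical blended pair
true_flux = np.array([1000.0, 400.0])

# sparse design matrix: one column per star
A = sparse.csr_matrix(np.column_stack([psf(c) for c in positions]))
data = A @ true_flux            # noiseless simulated layer

fluxes = lsqr(A, data)[0]       # simultaneous flux solution
```

In the real algorithm the same system is built per datacube layer with many stars, and sparsity keeps the fit tractable.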

The Extreme UltraViolet Imager (EUVI) telescope on board the Solar TErrestrial RElations Observatory (STEREO) mission provides extreme-ultraviolet (EUV) coronal images of the full Sun. Using time series of EUV images, the differential emission measure tomography (DEMT) technique allows determination of the three-dimensional (3D) distribution of the coronal electron density and temperature in the inner corona. EUV images are affected by stray-light contamination, which can be effectively removed if the point-spread function (PSF) of the instrument is well determined, as is the case for EUVI. We show the results of a detailed analysis of the impact of EUVI stray-light removal on DEMT results. To this end we analyze Carrington Rotation (CR) 2081 during the last solar minimum, characterized by a highly axisymmetric coronal structure. We find that stray-light decontamination of EUVI images implies a systematic decrease of the derived electron density scale height and a systematic increase of the derived coronal base density, while its effect on the derived temperature is neither systematic nor significant. We detail the results of the analysis in a quantitative fashion.

We present DEEM, the direct energy-encircling method, for characterizing the performance of fibres in most astronomical spectroscopic applications. It is a versatile platform to measure focal ratio degradation (FRD), throughput, and the point-spread function (PSF). The principle of DEEM and the relation between encircled energy (EE) and spot size were derived and simulated based on the power distribution model (PDM). We analysed the errors of DEEM and identified the major error source for better understanding and optimisation. The validity of DEEM has been confirmed by comparing its results with a conventional method, which shows that DEEM is robust and accurate in both stable and complex experimental environments. Applications on the integral field unit (IFU) show that the FRD of the 50 μm core fibre falls short of the requirement, which demands an output focal ratio slower than 4.5. The homogeneity of the throughput is acceptable, at higher than 85%. The first-generation prototype IFU helped identify imperfections and informed the new design of the next generation, based on a staggered structure with 35 μm core fibres of N.A. = 0.12, which can improve the FRD performance. The FRD dependence on wavelength and core size reveals that higher output focal ratios occur at shorter wavelengths for large-core fibres, in agreement with the prediction of the PDM, although the observed dependence is weaker than predicted.
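The relation between encircled energy and spot size that DEEM relies on can be illustrated with a toy spot image: sort pixels by radius, accumulate energy, and read off the radius containing a given fraction. The Gaussian spot model and its width are assumptions for illustration, not measured fibre data.

```python
import numpy as np

# synthetic far-field spot: a centered Gaussian (sigma = 20 px)
y, xg = np.mgrid[-100:101, -100:101]
r = np.hypot(xg, y)
spot = np.exp(-0.5 * (r / 20.0) ** 2)

def ee_radius(img, rad, fraction):
    # radius enclosing `fraction` of the total energy
    order = np.argsort(rad.ravel())
    cum = np.cumsum(img.ravel()[order])
    cum /= cum[-1]
    return rad.ravel()[order][np.searchsorted(cum, fraction)]

# for a Gaussian, EE(r) = 1 - exp(-r^2 / (2 sigma^2)),
# so the 95% radius should be sigma * sqrt(2 ln 20) ~ 49 px
r95 = ee_radius(spot, r, 0.95)
```

In the fibre measurement the same EE-versus-radius curve, taken at the output, characterizes the spot size and hence the output focal ratio.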

A design is proposed for a 20 m Canadian Very Large Optical Telescope (VLOT). This design meets the science, schedule, and availability requirements of the Canadian astronomical community. The telescope could be operational early in the next decade to complement the science discoveries of the Next Generation Space Telescope (NGST) and Atacama Large Millimeter Array (ALMA). The design is suitable for location on the Mauna Kea summit ridge, and could replace the current 3.6 m CFHT telescope. The telescope structure provides room for two vertically oriented Nasmyth instruments, implements a very stiff monocoque mirror cell, and offers a short and direct load path to the telescope mount. A Calotte-style dome structure offers many advantages over current designs, including lower and more even power requirements, and a circular aperture that will better protect the telescope structure from wind buffeting. The science requirements are presented, and the telescope optical design and primary mirror pupil segmentation options, including hexagonal segments and a radial segment design with a central 8 m mirror, are considered. Point-spread function plots and encircled-energy calculations show no significant difference in diffraction performance between the options, except that hexagonal segments in the 1 m point-to-point range appear to deliver poorer PSFs than 2 m and larger segments. Plans for implementation of a Matlab-based integrated telescope model are discussed. A summary of adaptive optics system issues for large telescopes is presented along with plans for future research in AO.

We describe a numerical wave propagation method for simulating long-range imaging of an extended scene under anisoplanatic conditions. Our approach computes an array of point-spread functions (PSFs) for a 2D grid on the object plane. The PSFs are then used in a spatially varying weighted-sum operation, with an ideal image, to produce a simulated image with realistic optical turbulence degradation. To validate the simulation we compare simulated outputs with the theoretical anisoplanatic tilt correlation and differential tilt variance, in addition to comparing the long- and short-exposure PSFs and the isoplanatic angle. Our validation analysis shows an excellent match between the simulation statistics and the theoretical predictions. The simulation tool is also used here to quantitatively evaluate a recently proposed block-matching and Wiener filtering (BMWF) method for turbulence mitigation. In this method, a block-matching registration algorithm is used to provide geometric correction for each of the individual input frames. The registered frames are then averaged and processed with a Wiener filter for restoration. A novel aspect of the proposed BMWF method is that the PSF model used for restoration takes into account the level of geometric correction achieved during image registration. This way, the Wiener filter is able to fully exploit the reduced blurring achieved by registration. The BMWF method is relatively simple computationally, and yet has excellent performance in comparison to state-of-the-art benchmark methods.
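The spatially varying weighted-sum operation can be shown in miniature: blur the ideal image with each grid PSF and blend the results with position-dependent weights. The two Gaussian PSFs and the linear weight ramp below are illustrative stand-ins for the simulation's full PSF grid and interpolation weights.

```python
import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(1)
ideal = rng.random((64, 64))            # stand-in for the ideal image

def gauss_psf(sigma, size=9):
    ax = np.arange(size) - size // 2
    g = np.exp(-0.5 * (ax[:, None] ** 2 + ax[None, :] ** 2) / sigma ** 2)
    return g / g.sum()

# one blurred copy per grid PSF (here: just two, mild and strong blur)
blur_left = convolve(ideal, gauss_psf(1.0))
blur_right = convolve(ideal, gauss_psf(3.0))

# spatially varying weights: blur strength ramps across the field
w = np.linspace(0.0, 1.0, 64)[None, :]
simulated = (1 - w) * blur_left + w * blur_right
```

The full simulator does the same with a 2D grid of turbulence-derived PSFs and 2D interpolation weights instead of a single ramp.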

A proposed Compton camera prototype based on pixelated CdTe is simulated and evaluated in order to establish its feasibility and expected performance in real laboratory tests. The system is based on module units containing a 2 × 4 array of square CdTe detectors of 10 × 10 mm² area and 2 mm thickness. The detectors are pixelated and stacked, forming a 3D detector with voxel sizes of 2 × 1 × 2 mm³. The camera performance is simulated with the Geant4-based Architecture for Medicine-Oriented Simulations (GAMOS), and the Origin Ensemble (OE) algorithm is used for the image reconstruction. The simulation shows that the camera can operate with source activities of up to 10⁴ Bq with constant efficiency and is completely saturated at 10⁹ Bq. The efficiency of the system is evaluated using a simulated ¹⁸F point-source phantom in the center of the field of view (FOV), achieving an intrinsic efficiency of 0.4 counts per second per kilobecquerel. The spatial resolution measured from the point-spread function (PSF) shows a FWHM of 1.5 mm along the direction perpendicular to the scatterer, making it possible to distinguish two points at 3 mm separation with a peak-to-valley ratio of 8.
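A common way to read a quoted FWHM off a sampled PSF profile is linear interpolation at the half-maximum crossings. The 1D Gaussian profile below is a toy stand-in for the simulated camera PSF, constructed to have a 1.5 mm FWHM.

```python
import numpy as np

def fwhm(x, y):
    # full width at half maximum via linear interpolation at the crossings
    half = y.max() / 2.0
    above = np.flatnonzero(y >= half)
    i, j = above[0], above[-1]
    left = np.interp(half, [y[i - 1], y[i]], [x[i - 1], x[i]])
    right = np.interp(half, [y[j + 1], y[j]], [x[j + 1], x[j]])
    return right - left

x = np.linspace(-5, 5, 501)          # position in mm
sigma = 1.5 / 2.3548                 # Gaussian with 1.5 mm FWHM
profile = np.exp(-0.5 * (x / sigma) ** 2)
```

For a real reconstructed PSF the profile is a slice through the 3D image of the point source, but the width estimate works the same way.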

In recent years, there have been multiple advances in positron emission tomography/computed tomography (PET/CT) that improve cancer imaging. The present generation of PET/CT scanners introduces new hardware, software, and acquisition methods. This review describes these new developments, which include time-of-flight (TOF), point-spread function (PSF), maximum-a-posteriori (MAP) based reconstruction, smaller voxels, respiratory gating, metal artefact reduction, and administration of quadratic weight-dependent ¹⁸F-fluorodeoxyglucose (FDG) activity. Also, hardware developments such as continuous bed motion (CBM), (digital) solid-state photodetectors and combined PET and magnetic resonance (MR) systems are explained. These novel techniques have a significant impact on cancer imaging, as they result in better image quality, improved small lesion detectability, and more accurate quantification of radiopharmaceutical uptake. This influences cancer diagnosis and staging, as well as therapy response monitoring and radiotherapy planning. Finally, the possible impact of these developments on the European Association of Nuclear Medicine (EANM) guidelines and EANM Research Ltd. (EARL) accreditation for FDG-PET/CT tumor imaging is discussed. (orig.)

In modern ultrasound imaging devices, two-dimensional probes and electronic scanning allow volumetric imaging of anatomical structures. When dealing with the design of such complex 3-D ultrasound (US) systems, as the number of transducers and channels dramatically increases, new challenges arise concerning the integration of electronics and the implementation of smart micro-beamforming strategies. Hence, the ability to predict the behavior of the whole system is mandatory. In this paper, we propose and describe an advanced simulation tool for ultrasound system modeling and simulation, which combines US propagation and scattering, signal transduction, electronic signal conditioning, and beamforming in a single environment. In particular, we present the architecture and model of an existing 16-channel integrated receiver, which includes an amplification and micro-beamforming stage, and validate it by comparison with circuit simulations. The developed model is then used in conjunction with the transducer and US field models to perform a system simulation, aimed at estimating the performance of an example 3-D US imaging system that uses a capacitive micromachined ultrasonic transducer (CMUT) 2-D phased array coupled to the modeled reception front-end. Results of point-spread function (PSF) calculations, as well as synthetic imaging of a virtual phantom, show that this tool is able to model the complete US image reconstruction process and could be used to quickly provide valuable system-level feedback for optimized tuning of electronic design parameters.

A 3D ultrasound image is desired in many medical examinations. However, the implementation of the 2D array needed for a 3D image is challenging with respect to fabrication, interconnection and cabling. A 2D sparse array, which needs fewer elements than a dense array, is a realistic way to achieve 3D images. Because the number of ways the elements can be placed in an array is extremely large, a method for optimizing the array configuration is needed. Previous research placed the target point far from the transducer array, making it impossible to optimize the array in the operating range. In our study, we focused on optimizing a 2D sparse-array transducer for 3D imaging by using a simulated annealing method. We compared the far-field optimization method with the near-field optimization method by analyzing the point-spread function (PSF). The resolution of the optimized sparse array is comparable to that of the dense array.
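A simulated annealing search over sparse-array configurations can be sketched generically: swap one active element at a time and accept worse configurations with a temperature-dependent probability. The toy linear-array cost function below (peak sidelobe of a far-field pattern at half-wavelength pitch) is an assumption for illustration, not the authors' 2D near-field PSF objective.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 32, 8                          # aperture slots, active elements
u = np.linspace(-1, 1, 256)           # normalized angular coordinate

def cost(active):
    # peak sidelobe level of the far-field pattern (half-wavelength pitch)
    pos = np.flatnonzero(active)
    pattern = np.abs(np.exp(1j * np.pi * np.outer(u, pos)).sum(axis=1))
    pattern /= pattern.max()
    return pattern[np.abs(u) > 0.05].max()

active = np.zeros(n, dtype=bool)
active[rng.choice(n, size=k, replace=False)] = True
cur, temp = cost(active), 1.0
for _ in range(2000):
    trial = active.copy()             # swap one active and one idle slot
    trial[rng.choice(np.flatnonzero(trial))] = False
    trial[rng.choice(np.flatnonzero(~trial))] = True
    c = cost(trial)
    # accept improvements always, worse moves with Boltzmann probability
    if c < cur or rng.random() < np.exp((cur - c) / temp):
        active, cur = trial, c
    temp *= 0.995                     # geometric cooling schedule
```

The near-field variant differs only in the cost function, which would evaluate the PSF at the intended operating depth instead of the far field.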

Super-resolution fluorescence microscopy has become a powerful tool to resolve structural information that is not accessible to traditional diffraction-limited imaging techniques such as confocal microscopy. Stochastic optical reconstruction microscopy (STORM) and photoactivation localization microscopy (PALM) are promising super-resolution techniques due to their relative ease of implementation and instrumentation on standard microscopes. However, the application of STORM is critically limited by its long sampling time. Several recent works have focused on improving STORM imaging speed by making use of the information from emitters with overlapping point-spread functions (PSFs). In this work, we present a fast and efficient algorithm that takes into account the blinking statistics of independent fluorescence emitters. We achieve sub-diffraction lateral resolution of 100 nm from 5 to 7 seconds of imaging. Our method is insensitive to background and can be applied to different types of fluorescence sources, including but not limited to the organic dyes and quantum dots that we demonstrate in this work.

Background and purpose: A dose compensation method is presented for patients with hip prostheses, based on Dynamic Multi-Leaf Collimator (DMLC) planning. Calculations are done from an exit Portal Dose Image (PDI) of a 6 MV photon beam using an Electronic Portal Imaging Device (EPID) from Varian. Four different hip prostheses are used for this work. Methods: From an exit PDI, the fluence needed to yield a uniform dose distribution behind the prosthesis is calculated. To back-project the dose distribution through the phantom, the lateral scatter is removed by deconvolution with a point-spread function (PSF) determined for depths from 10 to 40 cm. The dose maximum, Dmax, is determined from the primary plan which delivers the PDI. A further deconvolution to remove the dose-glare effect in the EPID is performed as well. Additionally, this calculated fluence distribution is imported into the Treatment Planning System (TPS) for the final calculation of a DMLC plan. The fluence file contains information such as the relative central-axis (CAX) position, grid size and fluence size needed for correct delivery of the DMLC plan. GafChromic EBT films positioned at 10 cm depth are used to verify the uniform dose distributions behind the prostheses. As the prosthesis is positioned at the phantom surface, the dose verifications are done 10 cm from the prosthesis. Conclusion: The film measurements with the 6 MV photon beam show uniform doses within 5% for most points, but with hot/cold spots of 10% near the femoral-head prostheses.
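Removing lateral scatter by deconvolution with a PSF can be sketched with a standard FFT-based Wiener filter; the Gaussian kernel and noise-to-signal ratio below are hypothetical stand-ins, not the depth-dependent PSFs measured in the paper.

```python
import numpy as np

def wiener_deconvolve(image, psf, nsr=1e-3):
    # FFT-based Wiener deconvolution: G = H* / (|H|^2 + NSR)
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * G))

# hypothetical scatter kernel: centered 2D Gaussian, sigma = 3 px
size = 64
ax = np.arange(size) - size // 2
psf = np.exp(-0.5 * (ax[:, None] ** 2 + ax[None, :] ** 2) / 3.0 ** 2)
psf /= psf.sum()

# blur a point dose and recover it
sharp = np.zeros((size, size))
sharp[32, 32] = 1.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp)
                               * np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deconvolve(blurred, psf)
```

In the actual workflow the PSF is selected per depth (10 to 40 cm), and a second deconvolution with a glare kernel handles the EPID's optical spread.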

The Sandia Strehl Calculator is designed to calculate the Gibson and Lanni point-spread function (PSF), Strehl ratio, and ensquared energy, allowing non-design immersion, coverslip, and sample layers. It also uses Abbe-number calculations to determine the refractive index at specific wavelengths when given the refractive index at a different wavelength and the dispersion. The primary application of the Sandia Strehl Calculator is to determine the theoretical impact of using an optical microscope beyond its normal design parameters. Examples of non-design microscope usage include: (a) using coverslips of non-design material; (b) coverslips of different thicknesses; (c) imaging deep into an aqueous sample with an immersion objective; (d) imaging a sample at 37 °C. All of these changes can affect the imaging quality, sometimes profoundly, yet such non-design conditions are employed not infrequently. Rather than having to experimentally determine whether the changes will result in unacceptable image quality, the Sandia Strehl Calculator uses existing optical theory to determine the approximate effect of the change, saving the need to perform experiments.
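For the Strehl ratio specifically, a widely used estimate is the Maréchal approximation, which relates the Strehl ratio to the RMS wavefront error. A one-line sketch (a standard textbook formula, not the calculator's full Gibson and Lanni computation):

```python
import numpy as np

def strehl_marechal(rms_waves):
    # Marechal approximation: S ~ exp(-(2*pi*sigma)^2),
    # with sigma the RMS wavefront error in units of wavelength
    return np.exp(-(2 * np.pi * rms_waves) ** 2)

# lambda/14 RMS gives the classical "diffraction-limited" threshold of ~0.8
s = strehl_marechal(1 / 14)
```

Aberrations introduced by non-design coverslip or sample layers enter such an estimate through the RMS wavefront error they add.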

The performance of a super-resolution (SR) reconstruction method on real-world data is not easy to measure, especially as a ground truth (GT) is often not available. In this paper, a quantitative performance measure is used, based on triangle orientation discrimination (TOD). The TOD measure, simulating a real-observer task, is capable of determining the performance of a specific SR reconstruction method under varying conditions of the input data. It is shown that the performance of an SR reconstruction method on real-world data can be predicted accurately by measuring its performance on simulated data. This prediction of the performance on real-world data enables the optimization of the complete chain of a vision system, from camera setup and SR reconstruction up to image detection/recognition/identification. Furthermore, different SR reconstruction methods are compared to show that the TOD method is a useful tool to select a specific SR reconstruction method according to the imaging conditions (camera fill factor, optical point-spread function (PSF), signal-to-noise ratio (SNR)).

We report on two alternative simple methods to detect counterparts of cosmic gamma-ray bursts (GRBs) and optical transients (OTs). We report on the development and tests of an alternative optical all-sky monitor recently tested at the Karlovy Vary Observatory. The monitor is based on a Peleng 8 mm fish-eye lens (1:3.5–1:16) and a Canon EOS 350D digital CCD camera. This type of monitor represents a low-cost device suitable for easy replication and still able to detect brighter optical transients simultaneously with GRB triggers. Such OTs have been observed for some GRBs, such as GRB990123, GRB060117, or recently GRB080319, indicating that some fraction of GRBs can generate optical transient emission accessible by simple small-aperture instrumentation as described here. These efforts are accompanied by the development of dedicated programmes to access and evaluate all-sky images, which are also briefly described. The all-sky monitor is a space-variant optical system, and its point-spread function (PSF) is not uniform across the field of view. The processing and measurement of the image data are therefore complicated, and sophisticated deconvolution algorithms are used for image restoration. The second method is GRB detection based on their ionospheric response.

The adjacency effect and non-uniform responses complicate the precise delimitation of the surface support of remote sensing data and their derived products. Thus, modeling spatial response characteristics (SRCs) prior to using remote sensing information has become important. A point-spread function (PSF) is typically used to describe the SRCs of the observation cells of remote sensors and is usually estimated in a laboratory before the sensor is launched. However, research on the SRCs of high-order remote sensing products derived from the observations remains insufficient, which is an obstacle to converting between multi-scale remote sensing products and validating coarse-resolution products. This study proposes a method that combines simulation and validation to establish SRC models of coarse-resolution albedo products. Two series of commonly used 500 m/1 km resolution albedo products, derived from Moderate Resolution Imaging Spectroradiometer (MODIS) reflectance data, were investigated using 30 m albedo products that provide the required sub-pixel information. The analysis shows that the surface support of each albedo pixel is larger than the nominal resolution of the pixel and that the response weight is non-uniformly distributed, with an elliptical Gaussian shape. The proposed methodology is generic and applicable to analyzing the SRCs of other advanced remote sensing products.
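An elliptical Gaussian response-weight model of the kind found here can be written down directly: a rotated 2D Gaussian over the sub-pixel grid, normalized so the weights sum to one. The grid spacing, widths, and orientation below are illustrative, not the fitted MODIS values.

```python
import numpy as np

def elliptical_gaussian(xs, ys, sx, sy, theta):
    # rotated elliptical Gaussian weight, normalized to unit sum
    ct, st = np.cos(theta), np.sin(theta)
    xr = ct * xs + st * ys
    yr = -st * xs + ct * ys
    w = np.exp(-0.5 * ((xr / sx) ** 2 + (yr / sy) ** 2))
    return w / w.sum()

# 30 m sub-pixel grid (meters) around one coarse-pixel center
g = np.arange(-30, 31) * 30.0
xs, ys = np.meshgrid(g, g)
w = elliptical_gaussian(xs, ys, sx=400.0, sy=250.0, theta=np.deg2rad(20))
```

Applied to 30 m albedo values, such a weight grid predicts the coarse pixel's observation, which is how the surface support can exceed the nominal resolution.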

Over the last two decades there has been a growing number of designs for positron emission tomography (PET) cameras optimized to image the breast. These devices, commonly known as positron emission mammography (PEM) cameras, achieve much higher spatial resolution by putting the photon detectors directly on the breast. PEM cameras have a compact geometry with a restricted field of view (FOV), thus exhibiting higher performance and lower cost than large whole-body PET scanners. Typical PEM designs are based on scintillators such as bismuth germanate (BGO), lutetium oxyorthosilicate (LSO) or lutetium yttrium orthosilicate (LYSO), and are characterized by large parallax error due to the lack of depth-of-interaction (DOI) information from the crystals. In the case of a parallel-geometry PEM, large parallax error results in poor image resolution along the vertical axis. In the framework of the Voxel Imaging PET (VIP) pathfinder project, we propose a high-resolution PEM scanner based on pixelated solid-state CdTe detectors. The pixel PEM device with a millimeter-size pixel pitch provides an excellent spatial resolution in all directions, 8 times better than standard commercial devices, with a point-spread function (PSF) of 1 mm full width at half maximum (FWHM), and excellent energy resolution down to 1.6% FWHM for 511 keV photons at room temperature. The system is capable of detecting hot spheres down to 1 mm in diameter in a warm background.

Integral field spectroscopy has become an invaluable tool for investigating the physical conditions and dynamics deep inside galaxy nuclei. The integral field spectrograph on JWST provides some crucial advantages over those on AO-assisted ground-based telescopes like Gemini and VLT. In particular, JWST will provide a stable and diffraction-limited point-spread function (PSF) with no seeing halo, and the background will be significantly reduced, resulting in shorter exposure times to achieve a benchmark signal-to-noise ratio, even for late-type galaxies that have shallower central cusps and fainter central surface brightnesses, and for which the exposure times required from the ground may be prohibitive. We are particularly interested in comparing black hole masses derived from the modeling of nuclear stellar dynamics to masses derived from reverberation mapping in the same galaxies. With this Early Release Science proposal, we request a small investment of time to clearly demonstrate JWST's capabilities in spatial and spectral resolution relative to the stringent technical requirements for direct black hole mass measurements. The technically demanding nature of the requisite measurements will allow us to explore the limits of what is possible to achieve with the NIRSpec IFU, thus providing technical guidance for a wide range of studies that seek to probe the physics of black hole feeding and feedback and their links to galaxy and black hole co-evolution.

To understand the effects of inhomogeneous drying on the quality of polymer coatings, an experimental setup has been developed to resolve the flow field occurring throughout the drying film. Deconvolution microscopy is used to analyze the flow field in 3D and time. Since the dimension of the spatial component in the line-of-sight direction is limited compared to the lateral components, a multi-focal approach is used. Here, the beam of light is equally distributed onto up to five cameras using cubic beam splitters. Adding a meniscus lens between each pair of camera and beam splitter and setting different distances between each camera and its meniscus lens creates multi-focality and allows one to increase the depth of the observed volume. Resolving the spatial component in the line-of-sight direction is based on analyzing the point-spread function (PSF). The analysis of the PSF is computationally expensive and introduces high complexity compared to traditional particle image velocimetry approaches. A new algorithm tailored to the parallel computing architecture of recent graphics processing units has been developed. The algorithm is able to process typical images in less than a second and has further potential to enable online analysis in the future. As a proof of principle, the flow fields occurring in thin polymer solutions drying at ambient conditions, and at boundary conditions that force inhomogeneous drying, are presented.

We present 11.1 to 37.1 μm imaging observations of the very dense molecular cloud core MM1 in G034.43+00.24 using FORCAST on SOFIA, and submillimeter (submm) polarimetry using SHARP on the Caltech Submillimeter Observatory. We find that, at the spatial resolution of SOFIA, the point-spread function (PSF) of MM1 is consistent with a single source, as expected based on millimeter (mm) and submm observations. The spectral energy distributions (SEDs) of MM1 and MM2 have a warm component at the shorter wavelengths not seen in the mm and submm SEDs. Examination of H-band (1.65 μm) stellar polarimetry from the Galactic Plane Infrared Polarization Survey shows that G034 is embedded in an external magnetic field aligned with the Galactic Plane. The SHARP polarimetry at 450 μm shows a magnetic field geometry in the vicinity of MM1 that does not line up with either the Galactic Plane or the mean field direction inferred from the CARMA interferometric polarization map of the central cloud core, but is perpendicular to the long filament in which G034 is embedded. The CARMA polarimetry does show evidence for grain alignment in the central region of the cloud core, and thus does trace the magnetic field geometry near the embedded Class 0 YSO.

Coded aperture X-ray computed tomography (CT) has the potential to revolutionize X-ray tomography systems in medical imaging and in air and rail transit security, both areas of global importance. It allows either a reduced set of measurements in X-ray CT without degradation in image reconstruction, or the measurement of multiplexed X-rays to simplify the sensing geometry. Measurement reduction is of particular interest in medical imaging to reduce radiation dose, and airport security often imposes practical constraints leading to limited-angle geometries. Coded aperture compressive X-ray CT places a coded aperture pattern in front of the X-ray source in order to obtain patterned projections onto a detector. Compressive sensing (CS) reconstruction algorithms are then used to recover the image. To date, the coded illumination patterns used in conventional CT systems have been random. This paper addresses the code optimization problem for general tomographic imaging based on the point-spread function (PSF) of the system, which is used as a measure of the sensing matrix quality and connects to the restricted isometry property (RIP) and coherence of the sensing matrix. The methods presented are general, simple to use, and can be easily extended to other imaging systems. Simulations are presented in which the peak signal-to-noise ratios (PSNR) of the images reconstructed using optimized coded apertures exhibit significant gains over those attained by random coded apertures. Additionally, results using real X-ray tomography projections are presented.
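The optimization criterion above ties the system PSF to the RIP and coherence of the sensing matrix. As a minimal illustration of the coherence part only (the function and the test matrices below are ours, not the paper's), the mutual coherence of a sensing matrix can be computed as:

```python
import numpy as np

def mutual_coherence(A):
    """Largest absolute normalized inner product between distinct
    columns of A; lower values generally indicate a better-conditioned
    compressive sensing matrix."""
    An = A / np.linalg.norm(A, axis=0, keepdims=True)   # unit-norm columns
    G = np.abs(An.T @ An)                               # Gram matrix
    np.fill_diagonal(G, 0.0)                            # ignore self-products
    return float(G.max())

rng = np.random.default_rng(0)
dense = rng.standard_normal((64, 256))                  # generic random projections
coded = (rng.random((64, 256)) < 0.5).astype(float)     # 0/1 mask-style pattern
print(mutual_coherence(dense), mutual_coherence(coded))
```

Comparing such coherence values across candidate aperture codes is one simple proxy for the sensing-matrix quality the paper optimizes.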

In recent years, there have been multiple advances in positron emission tomography/computed tomography (PET/CT) that improve cancer imaging. The present generation of PET/CT scanners introduces new hardware, software, and acquisition methods. This review describes these new developments, which include time-of-flight (TOF), point-spread function (PSF), maximum-a-posteriori (MAP) based reconstruction, smaller voxels, respiratory gating, metal artefact reduction, and administration of quadratic weight-dependent 18F-fluorodeoxyglucose (FDG) activity. Also, hardware developments such as continuous bed motion (CBM), (digital) solid-state photodetectors and combined PET and magnetic resonance (MR) systems are explained. These novel techniques have a significant impact on cancer imaging, as they result in better image quality, improved small-lesion detectability, and more accurate quantification of radiopharmaceutical uptake. This influences cancer diagnosis and staging, as well as therapy response monitoring and radiotherapy planning. Finally, the possible impact of these developments on the European Association of Nuclear Medicine (EANM) guidelines and EANM Research Ltd. (EARL) accreditation for FDG-PET/CT tumor imaging is discussed.

In SPECT small animal imaging, it is highly recommended to model the detector response accurately in order to improve the low spatial resolution. The volume to reconstruct is thus obtained both by back-projecting and by deconvolving the projections. We chose iterative methods, which permit one to solve the inverse problem independently of the model's complexity. We describe in this work a Gaussian model of the point-spread function (PSF) whose position, width and maximum are computed according to physical and geometrical parameters. We then use the rotation symmetry to replace the computation of P projection operators, each one corresponding to one position of the detector around the object, by the computation of only one of them. This is achieved by choosing an appropriate polar discretization, for which we control the angular density of voxels to avoid over-sampling the center of the field of view. Finally, we propose a new family of algorithms, the so-called frequency-adapted algorithms, which make it possible to optimize the reconstruction of a given band in the frequency domain with respect to both the speed of convergence and the quality of the image. (author)
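A Gaussian PSF parameterized this way can be sketched as follows; note that the specific dependence of width and maximum on the source distance `d` below is hypothetical, chosen only to illustrate the idea of deriving the three Gaussian parameters from geometric quantities:

```python
import numpy as np

def gaussian_psf(u, center, sigma, amplitude):
    """1D detector response to a point source: a Gaussian whose center,
    width and maximum are set by physical/geometrical parameters."""
    return amplitude * np.exp(-((u - center) ** 2) / (2.0 * sigma ** 2))

def psf_params(d, sigma0=1.0, slope=0.05, a0=1.0):
    """Hypothetical geometry model: width grows linearly with the
    source-to-collimator distance d, amplitude falls off inversely."""
    sigma = sigma0 + slope * d
    amplitude = a0 * sigma0 / sigma
    return sigma, amplitude

u = np.arange(-10, 10.5, 0.5)          # detector bin coordinates
for d in (0.0, 50.0):
    sigma, a = psf_params(d)
    resp = gaussian_psf(u, 0.0, sigma, a)
    print(d, round(sigma, 2), round(resp.max(), 3))
```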

Recent interest in hybrid RF/optical communications has led to the development and installation of a "polished-panel" optical receiver evaluation assembly on the 34-meter research antenna at Deep-Space Station 13 (DSS-13) at NASA's Goldstone Communications Complex. The test setup consists of a custom aluminum panel polished to optical smoothness, and a large-sensor CCD camera designed to image the point-spread function (PSF) generated by the polished aluminum panel. Extensive data have been obtained via real-time tracking and imaging of planets and stars at DSS-13. Both "on-source" and "off-source" data were recorded at various elevations, enabling the development of realistic simulations and analytic models to help determine the performance of future deep-space communications systems operating with on-off keying (OOK) or pulse-position-modulated (PPM) signaling formats with photon-counting detection, compared with the ultimate quantum bound on detection performance for these modulations. Experimentally determined PSFs were scaled to provide realistic signal distributions across a photon-counting detector array when a pulse is received, and uncoded as well as block-coded performance was analyzed and evaluated for a well-known class of block codes.

We present an alternative approach to tomographic particle image velocimetry (tomo-PIV) that seeks to recover nearly single-voxel particles rather than blobs of extended size. The baseline of our approach is a particle-based representation of the image data. An appropriate discretization of this representation yields an original linear forward model with a weight matrix built from specific samples of the system's point-spread function (PSF). Such an approach requires only a few voxels to explain the image appearance, and therefore favors much more sparsely reconstructed volumes than classic tomo-PIV. The proposed forward model is general and flexible, and can be embedded in a classical multiplicative algebraic reconstruction technique (MART) or a simultaneous multiplicative algebraic reconstruction technique (SMART) inversion procedure. We show, using synthetic PIV images and by way of a large exploration of the generating conditions and a variety of performance metrics, that the model leads to better results than the classical tomo-PIV approach, in particular for seeding densities greater than 0.06 particles per pixel and for PSFs characterized by a standard deviation larger than 0.8 pixels. (paper)
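Embedding such a forward model in MART can be sketched as below. This is a generic MART iteration, not the paper's code; the toy weight matrix merely stands in for PSF samples, and all names and sizes are ours:

```python
import numpy as np

def mart(W, p, n_iter=200, mu=1.0, eps=1e-12):
    """Multiplicative ART: iteratively update voxel intensities E so that
    the forward projection W @ E matches the measured pixel values p.
    Each row of W holds the PSF-derived weights of one pixel."""
    E = np.ones(W.shape[1])
    for _ in range(n_iter):
        for j in range(len(p)):          # loop over pixels
            w = W[j]
            proj = w @ E + eps           # current forward projection
            # Multiplicative correction, weighted by each voxel's w.
            E *= (p[j] / proj) ** (mu * w)
    return E

# Toy system: 3 pixels viewing 4 voxels; the true volume has one
# bright voxel, mimicking a nearly single-voxel particle.
W = np.array([[1.0, 0.5, 0.0, 0.0],
              [0.0, 1.0, 0.5, 0.0],
              [0.0, 0.0, 1.0, 0.5]])
truth = np.array([0.0, 1.0, 0.0, 0.0])
p = W @ truth
E = mart(W, p)
print(np.round(E, 2))   # recovers approximately [0, 1, 0, 0]
```

Note how the zero-valued measurement immediately zeroes the voxels it sees, which is what drives the sparse, nearly single-voxel reconstructions described above.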

The Sloan Digital Sky Survey has validated and made publicly available its Second Data Release. This data release consists of 3324 square degrees of five-band (u g r i z) imaging data with photometry for over 88 million unique objects, 367,360 spectra of galaxies, quasars, stars and calibrating blank sky patches selected over 2627 square degrees of this area, and tables of measured parameters from these data. The imaging data reach a depth of r ~ 22.2 (95% completeness limit for point sources) and are photometrically and astrometrically calibrated to 2% rms and 100 milli-arcsec rms per coordinate, respectively. The imaging data have all been processed through a new version of the SDSS imaging pipeline, in which the most important improvement since the last data release is the fixing of an error in the model fits to each object. The result is that model magnitudes are now a good proxy for point-spread function (PSF) magnitudes for point sources, and Petrosian magnitudes for extended sources. The spectroscopy extends from 38...

The MYTHEN detector is a one-dimensional microstrip detector with single-photon-counting readout optimized for time-resolved powder diffraction experiments at the Swiss Light Source (SLS). The system has been successfully tested for many different synchrotron radiation applications including phase contrast and tomographic imaging, small-angle scattering, diffraction and time-resolved pump-and-probe experiments, for X-ray energies down to 5 keV and counting rates up to 3 MHz. The frontend electronics is designed to be coupled to 50 μm pitch microstrip sensors, but there is growing interest in enhancing the spatial resolution for imaging and powder diffraction experiments. A test structure with strip pitches in the range 10-50 μm has been tested, and the gain and noise of the readout electronics have been measured for the different strip pitches, with no large difference observed down to 25 μm. Moreover, the effect of charge sharing between neighboring strips on the spatial resolution has been quantified by measuring the point-spread function (PSF) of the system for the different pitches.

The potential development of large-aperture ground-based "photon bucket" optical receivers for deep space communications has received considerable attention recently. One approach currently under investigation proposes to polish the aluminum reflector panels of 34-meter microwave antennas to high reflectance, and to accept the relatively large spot size generated by even state-of-the-art polished aluminum panels. Here we describe the experimental effort currently underway at the Deep Space Network (DSN) Goldstone Communications Complex in California to test and verify these concepts in a realistic operational environment. A custom-designed aluminum panel has been mounted on the 34-meter research antenna at Deep-Space Station 13 (DSS-13), and a remotely controlled CCD camera with a large CCD sensor in a weather-proof container has been installed next to the subreflector, pointed directly at the custom polished panel. Using the planet Jupiter as the optical point source, the point-spread function (PSF) generated by the polished panel has been characterized, the array data processed to determine the center of the intensity distribution, and the expected communications performance of the proposed polished-panel optical receiver evaluated.

Suppose N is a Banach space with norm |·| and R is the set of real numbers. All integrals used are of the subdivision-refinement type. The main theorem [Theorem 3] gives a representation of TH, where H is a function from R×R to N such that H(p+, p+), H(p, p+), H(p−, p−), and H(p−, p) each exist for each p, and T is a bounded linear operator on the space of all such functions H. In particular we show that TH = (I)∫_a^b f_H dα + Σ_{i=1}^∞ [H(x_{i−1}, x_{i−1}+) − H(x_{i−1}+, x_{i−1}+)] β(x_{i−1}) + Σ_{i=1}^∞ [H(x_i−, x_i) − H(x_i−, x_i−)] Θ(x_{i−1}, x_i), where each of α, β, and Θ depends only on T, α is of bounded variation, β and Θ are 0 except at countably many points, f_H is a function from R to N depending on H, and {x_i}_{i=1}^∞ denotes the points p in [a, b] for which [H(p, p+) − H(p+, p+)] ≠ 0 or [H(p−, p) − H(p−, p−)] ≠ 0. We also define an interior interval function integral and give a relationship between it and the standard interval function integral.

We present a numerical wave propagation method for simulating imaging of an extended scene under anisoplanatic conditions. While isoplanatic simulation is relatively common, few tools are specifically designed for simulating the imaging of extended scenes under anisoplanatic conditions. We provide a complete description of the proposed simulation tool, including the wave propagation method used. Our approach computes an array of point-spread functions (PSFs) for a two-dimensional grid on the object plane. The PSFs are then used in a spatially varying weighted-sum operation, with an ideal image, to produce a simulated image with realistic optical turbulence degradation. The degradation includes spatially varying warping and blurring. To produce the PSF array, we generate a series of extended phase screens. Simulated point sources are numerically propagated from an array of positions on the object plane, through the phase screens, and ultimately to the focal plane of the simulated camera. Note that the optical path for each PSF will be different and will thus pass through a different portion of the extended phase screens. These different paths give rise to a spatially varying PSF that produces anisoplanatic effects. We use a method for defining the individual phase screen statistics that we have not seen used in previous anisoplanatic simulations. We also present a validation analysis. In particular, we compare simulated outputs with the theoretical anisoplanatic tilt correlation and a derived differential tilt variance statistic, in addition to comparing the long- and short-exposure PSFs and the isoplanatic angle. We believe this analysis represents the most thorough validation of an anisoplanatic simulation to date. The current work is also unique in that we simulate and validate both constant and varying Cn2(z) profiles. Furthermore, we simulate sequences with both temporally independent and temporally correlated turbulence effects. Temporal correlation is introduced
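The spatially varying weighted-sum operation can be sketched as follows. This is a two-PSF toy version of the PSF-grid idea, not the paper's implementation; the weight maps, PSF widths and image sizes are all illustrative:

```python
import numpy as np

def convolve2d(img, psf):
    """Same-size 2D convolution via zero-padded FFTs (NumPy only)."""
    H, W = img.shape
    h, w = psf.shape
    F = np.fft.rfft2(img, (H + h - 1, W + w - 1))
    G = np.fft.rfft2(psf, (H + h - 1, W + w - 1))
    full = np.fft.irfft2(F * G, (H + h - 1, W + w - 1))
    r0, c0 = h // 2, w // 2                 # crop back to input size
    return full[r0:r0 + H, c0:c0 + W]

def gaussian_psf(size, sigma):
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    return g / g.sum()                      # unit-energy PSF

def anisoplanatic_blur(img, psfs, weights):
    """Weighted sum of per-PSF blurred copies; weights[k] maps how much
    PSF k contributes at each output pixel (weights sum to 1 per pixel)."""
    out = np.zeros_like(img, dtype=float)
    for psf, w in zip(psfs, weights):
        out += w * convolve2d(img, psf)
    return out

# Ideal image: two point sources, left and right.
img = np.zeros((64, 64))
img[16, 16] = 1.0
img[16, 48] = 1.0
psfs = [gaussian_psf(15, 1.0), gaussian_psf(15, 3.0)]   # sharp vs turbulent
x = np.linspace(0, 1, 64)
w1 = np.tile(x, (64, 1))                    # wide PSF dominates on the right
sim = anisoplanatic_blur(img, psfs, [1.0 - w1, w1])
# The left point retains a sharper (higher) peak than the right one.
print(sim[16, 16] > sim[16, 48])
```

Warping, which the paper also models, would enter as a per-region shift of each PSF before the weighted sum.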

The ambitious science goals of the Large Synoptic Survey Telescope (LSST) will be achieved in part by a wide-field imager that will achieve a new level of performance in terms of area, speed, and sensitivity. The instrument performance is dominated by the focal plane sensors, which are now in development. These new-generation sensors will make use of advanced semiconductor technology and will be complemented by a highly integrated electronics package located inside the cryostat. A test laboratory has been set up at Brookhaven National Laboratory (BNL) to characterize prototype sensors and to develop test and assembly techniques for eventual integration of production sensors and electronics into modules that will form the final focal plane. As described in [1], the key requirements for LSST sensors are wideband quantum efficiency (QE) extending beyond 1 μm in the red, control of the point-spread function (PSF), and fast readout using multiple amplifiers per chip operated in parallel. In addition, LSST's fast optical system (f/1.25) places severe constraints on focal plane flatness. At the chip level this involves packaging techniques to minimize warpage of the silicon die, and at the mosaic level careful assembly and metrology to achieve high coplanarity of the sensor tiles. In view of the long lead time to develop the needed sensor technology, LSST undertook a study program with several vendors to fabricate and test devices which address the most critical performance features [2]. The remainder of this paper presents key results of this study program. Section 2 summarizes the sensor requirements and the results of design optimization studies, and Section 3 presents the sensor development plan. In Section 4 we describe the test bench at BNL. Section 5 reports measurement results obtained to date on devices fabricated by several vendors. Section 6 presents a summary of the paper and an outlook for future work. We present characterization methods and results on

Multi-object adaptive optics (MOAO) is a novel adaptive optics (AO) technique for wide-field multi-object spectrographs (MOS). MOAO aims at applying dedicated wavefront corrections to numerous separated tiny patches spread over a large field of view (FOV), limited only by that of the telescope. The control of each deformable mirror (DM) is done individually using a tomographic reconstruction of the phase based on measurements from a number of wavefront sensors (WFS) pointing at natural and artificial guide stars in the field. We have developed a novel hybrid, pseudo-analytical simulation scheme, somewhere in between the end-to-end and purely analytical approaches, that allows us to simulate the tomographic problem in detail, as well as noise and aliasing with high fidelity, including fitting and bandwidth errors thanks to a Fourier-based code. Our tomographic approach is based on the computation of the minimum mean square error (MMSE) reconstructor, from which we derive numerically the covariance matrix of the tomographic error, including aliasing and propagated noise. We are then able to simulate the point-spread function (PSF) associated with this covariance matrix of the residuals, as in PSF reconstruction algorithms. The advantage of our approach is that we compute the same tomographic reconstructor that would be computed when operating the real instrument, so that our developments open the way for a future on-sky implementation of the tomographic control, plus joint PSF and performance estimation. The main challenge resides in the computation of the tomographic reconstructor, which involves the inversion of a large matrix (typically 40 000 × 40 000 elements). To perform this computation efficiently, we chose an optimized approach based on the use of GPUs as accelerators, together with an optimized linear algebra library, MORSE, providing a significant speedup over standard CPU-oriented libraries such as Intel MKL. Because the covariance matrix is

Dark energy is one of the most important unsolved problems in modern physics, and weak gravitational lensing (WL) by mass structures along the line of sight ("cosmic shear") is a promising technique to learn more about its nature. However, WL is subject to numerous systematic errors which induce biases in measured cosmological parameters and prevent the development of its full potential. In this thesis, we advance the understanding of WL systematics in the context of the Dark Energy Survey (DES). We develop a testing suite to assess the performance of the shapelet-based DES WL measurement pipeline. We determine that the measurement bias of the parameters of our point-spread function (PSF) model scales as (S/N)^−2, implying that a PSF S/N > 75 is needed to satisfy DES requirements. PSF anisotropy suppression also satisfies the requirements for source galaxies with S/N ≳ 45. For low-noise, marginally resolved exponential galaxies, the shear calibration errors are up to about 0.06% (for shear values ≲ 0.075). Galaxies with S/N ≳ 75 present errors of about 1%, sufficient for first-year DES data. However, more work is needed to satisfy full-area DES requirements, especially in the high-noise regime. We then implement tests to validate the high accuracy of the map between pixel coordinates and sky coordinates (the astrometric solution), which is crucial to detect the required number of galaxies for WL in stacked images. We also study the effect of atmospheric dispersion on cosmic shear experiments such as DES and the Large Synoptic Survey Telescope (LSST) in the four griz bands. For DES (LSST), we find systematics in the g and r (g, r, and i) bands that are larger than required. We find that a simple linear correction in galaxy color is accurate enough to reduce dispersion shear systematics to insignificant levels in the r (i) band for DES (LSST). More complex corrections will likely reduce the systematic cosmic-shear errors below statistical errors for the LSST r band

The ETCC, combining a gas time projection chamber (TPC) and pixel GSO scintillators, measures electron tracks precisely and thereby provides both strong background rejection via the dE/dx of the track and a well-defined 2-dimensional point-spread function (PSF) of better than several degrees, obtained by adding the arc direction of incident gammas (SPD: Scatter Plane Deviation) to the ARM (Angular Resolution Measure) direction measured in a standard Compton camera (CC). In 2006 its background rejection was demonstrated by the SMILE-I balloon experiment with a 10 cm-cubic ETCC using the dE/dx of tracks. In 2013, a 30 cm-cube ETCC was developed to catch gammas from the Crab in the next SMILE-II balloon flight with a >5σ detection in 4 hrs. Its sensitivity has now been improved to 10σ by bringing the angular resolution of the track (SPD angle) down to the limit set by multiple scattering in the gas. Thus, we show that the ETCC gives a significance better by a factor of 10 than that of standard CCs of the same detection area, thanks to electron tracking, and we have found that the SPD is essential for defining the PSF of Compton imaging quantitatively. Such a well-defined PSF makes it possible, for the first time, to provide reliable sensitivity in Compton imaging without relying on an optimization algorithm. These studies uncover the uncertainties of CCs arising from both the intense background and the difficulty of defining the PSF, and overcome both problems. Based on this technology, SMILE-II with 3 atm CF4 gas is expected to provide a 5 times better sensitivity than COMPTEL in a one-month balloon flight, and 4 modules of 50 cm-cube ETCCs would exceed 10^−12 erg cm^−2 s^−1 (1 mCrab) on a satellite. Here we summarize the performance of the ETCC and the new astrophysics to be opened in the near future by highly sensitive observations of MeV gamma-rays.

PET is an established modality for myocardial perfusion imaging (MPI) which enables quantification of absolute myocardial blood flow (MBF) using dynamic imaging and kinetic modeling. However, heart motion and partial volume effects (PVE) significantly limit the spatial resolution and quantitative accuracy of PET MPI. Simultaneous PET-MR offers a solution to the motion problem in PET by enabling MR-based motion correction of PET data. The aim of this study was to develop a motion and PVE correction methodology for PET MPI using simultaneous PET-MR, and to assess its impact on both static and dynamic PET MPI using 18F-Flurpiridaz, a novel 18F-labeled perfusion tracer. Two dynamic 18F-Flurpiridaz MPI scans were performed on healthy pigs using a PET-MR scanner. Cardiac motion was tracked using a dedicated tagged-MRI (tMR) sequence. Motion fields were estimated using non-rigid registration of tMR images and used to calculate motion-dependent attenuation maps. Motion correction of PET data was achieved by incorporating tMR-based motion fields and motion-dependent attenuation coefficients into image reconstruction. Dynamic and static PET datasets were created for each scan. Each dataset was reconstructed as (i) Ungated, (ii) Gated (end-diastolic phase), and (iii) Motion-Corrected (MoCo), each without and with point-spread function (PSF) modeling for PVE correction. Myocardium-to-blood concentration ratios (MBR) and apparent wall thickness were calculated to assess image quality for static MPI. For dynamic MPI, segment- and voxel-wise MBF values were estimated by non-linear fitting of a 2-tissue compartment model to tissue time-activity curves. MoCo and Gating respectively decreased mean apparent wall thickness by 15.1% and 14.4% and increased MBR by 20.3% and 13.6% compared to Ungated images. For dynamic PET, mean MBF across all segments was comparable for MoCo (0.72 ± 0.21 ml/min/ml) and Gating (0.69 ± 0.18 ml/min/ml). Ungated data yielded

After presenting the theory in engineers' language without the unfriendly abstraction of pure mathematics, several illustrative examples are discussed in great detail to show how the various functions of the Bessel family enter into the solution of technically important problems. Axisymmetric vibrations of a circular membrane, oscillations of a uniform chain, heat transfer in circular fins, buckling of columns of varying cross-section, vibrations of a circular plate and current density in a conductor of circular cross-section are considered. The problems are formulated purely from physical considerations (using, for example, Newton's law of motion, Fourier's law of heat conduction, electromagnetic field equations, etc.). Infinite series expansions, recurrence relations, manipulation of expressions involving Bessel functions, orthogonality and expansion in Fourier-Bessel series are also covered in some detail. Some important topics such as asymptotic expansions, the generating function and Sturm-Liouville theory are r...

This book, immediately striking for its conciseness, is one of the most remarkable works ever produced on the subject of algebraic functions and their integrals. The distinguishing feature of the book is its third chapter, on rational functions, which gives an extremely brief and clear account of the theory of divisors.... A very readable account is given of the topology of Riemann surfaces and of the general properties of abelian integrals. Abel's theorem is presented, with some simple applications. The inversion problem is studied for the cases of genus zero and genus unity. The chapter on t

Eight male subjects in each of three age groups (21-26, 40-45, 60-72 years) slept in pairs in the CAMI sonic boom simulation facility for 21 consecutive nights. The first five nights were used to acclimate the subjects (nights 1 and 2) and to obtain ...

Questions about death have sparked great interest in different areas of society such as the media, religion, health, economy, demography and the family. Currently, death is a model for thinking about human existence and the impact that this phenomenon causes in people's lives, as it is the only certainty in life. When it comes to old age, death is highlighted, as there is a close relationship between seniors and death in society. However, at this moment, such arguments do not hold up as before, since the relation now existing has become relative and quite distinct. In order to demonstrate these concepts, this paper aims to address the relationship between death and aging using the experiences of three elderly women for whom the recognition of death has gained distinct contours. The research was conducted in 2011 at a basic family health center in the municipality of Bayeux (PB). The methodology adopted was content analysis of the reports of experiences of the elderly women and of the interviews with them. The research has shown that, although the three women share the condition of old age, the representations and meanings of death pass through different moments, depending on the life history and the experiences of the subjects. The various moments emphasized by the elderly women show that death can be beneficial, seen as a good event, that death also promotes a new meaning to life and, finally, that the current management of active aging carries an element of denial and total removal of death.

In pre-clinical applications, it is quite important to preserve the image resolution because it is necessary to show the details of the structures of small animals. Therefore, small animal PET scanners require high spatial resolution and good sensitivity. For the quad-HIDAC PET scanner, which has virtually continuous spatial sampling, improvements in resolution, noise and contrast are obtained as a result of avoiding the artifacts introduced by binning the data into the sampled projections used during the reconstruction process. In order to reconstruct high-resolution images in 3D-PET, background correction and resolution recovery are included within the maximum-likelihood list-mode expectation maximization reconstruction model. This paper introduces a performance analysis of the Gaussian, Laplacian and triangular kernels. The full width at half maximum (FWHM) used for each kernel was varied from 0.8 to 1.6 mm. For each quality compartment within the phantom, transaxial middle slices from the 3D reconstructed images are shown. The results show that, according to the quantitative measures, the triangular kernel has the best performance.
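The three kernels, each parameterized by its FWHM as in the study above, can be written down as follows. This is a generic sketch (the normalization conventions are ours), chosen so that every kernel peaks at 1 and falls to 0.5 exactly at ±FWHM/2:

```python
import numpy as np

def gaussian_kernel(r, fwhm):
    # FWHM of a Gaussian is 2*sqrt(2*ln 2)*sigma ≈ 2.3548*sigma.
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return np.exp(-r**2 / (2.0 * sigma**2))

def laplacian_kernel(r, fwhm):
    # exp(-|r|/b) falls to 1/2 at |r| = b*ln 2, so FWHM = 2*b*ln 2.
    b = fwhm / (2.0 * np.log(2.0))
    return np.exp(-np.abs(r) / b)

def triangular_kernel(r, fwhm):
    # A triangle reaching zero at |r| = fwhm is at 1/2 for |r| = fwhm/2.
    return np.clip(1.0 - np.abs(r) / fwhm, 0.0, None)

r = np.linspace(-2, 2, 401)            # radial coordinate in mm
for kernel in (gaussian_kernel, laplacian_kernel, triangular_kernel):
    v = kernel(r, 1.2)                 # 1.2 mm FWHM, mid-range of the study
    # each kernel equals 0.5 at r = FWHM/2 = 0.6 mm by construction
    print(kernel.__name__, round(float(np.interp(0.6, r, v)), 3))
```

The triangular kernel's compact support (it is exactly zero beyond one FWHM) is a plausible reason it behaves differently from the infinitely supported Gaussian and Laplacian in the resolution-recovery model.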

term saving of money are the reasons why developing countries should be investing in functional neurosurgery units. Surgery for medically refractory epilepsy can save large amounts of money in the long run if one considers the cost of second- and third-line antiepileptic drugs and the associated morbidity of uncontrolled ...

Functional hyposplenism is a condition accompanying many diseases such as sickle cell disease, celiac disease, alcoholic liver disease, hepatic cirrhosis, lymphomas and autoimmune disorders. It is characterised mostly by defective immune responses against infectious agents, especially encapsulated organisms, since the spleen is thought to play an important role in the production and maturation of B-memory lymphocytes and of other substances like opsonins, both of which are considered crucial elements of the immune system for fighting infections. It is also associated with thrombocytosis, which might lead to thromboembolic events. Functional hyposplenism is diagnosed by the presence of Howell-Jolly bodies and pitted erythrocytes in the peripheral blood smear, and by nuclear imaging modalities such as spleen scintigraphy with the use of Technetium-99m and/or spleen scintigraphy with the use of heat-damaged Technetium-99m-labeled erythrocytes. Severe infections accompanying functional hyposplenism can lead to the overwhelming post-infection syndrome, which can often be fatal. Identifying patients with functional hyposplenism is important because simple measures such as vaccination against common infective microorganisms (e.g. Streptococcus pneumoniae, Neisseria meningitidis and Haemophilus influenzae) and antibiotic therapy when needed are considered beneficial in diminishing the frequency and gravity of the infections accompanying the syndrome.

Purpose of review Functional dyspepsia is a common disorder, most of the time of unknown etiology and with variable pathophysiology. Therapy has been and still is largely empirical. Data from recent studies provide new clues for targeted therapy based on knowledge of etiology and pathophysiologic

Full Text Available A functional credential allows a user to anonymously prove possession of a set of attributes that fulfills a certain policy. The policies are arbitrary polynomially computable predicates that are evaluated over arbitrary attributes. The key feature of this primitive is the delegation of verification to third parties, called designated verifiers. The delegation protects the privacy of the policy: a designated verifier can verify that a user satisfies a certain policy without learning anything about the policy itself. We illustrate the usefulness of this property in different applications, including outsourced databases with access control. We present a new framework to construct functional credentials that does not require (non-interactive) zero-knowledge proofs. This is important in settings where the statements are complex and thus the resulting zero-knowledge proofs are not efficient. Our construction is based on any predicate encryption scheme and the security relies on standard assumptions. A complexity analysis and an experimental evaluation confirm the practicality of our approach.

The term lung function is often restricted to the assessment of volume-time curves measured at the mouth. Spirometry includes the assessment of the lung volumes which can be mobilised, together with the corresponding flow-volume curves. In addition, lung volumes that cannot be mobilised, such as the residual volume, or only partially, such as FRC and TLC, can be measured by body plethysmography combined with determination of the airway resistance. Body plethysmography allows the correct positioning of forced breathing manoeuvres on the volume axis, e.g. before and after pharmacotherapy. Adding the CO single-breath transfer factor (TLCO), which includes measurement of the ventilated lung volume using He, enables a clear diagnosis of different obstructive, restrictive or mixed ventilatory defects with and without trapped air. Tests of reversibility and provocation, as well as the assessment of inspiratory mouth pressures (PImax, P0.1), help to classify the underlying disorder and to clarify treatment strategies. For further information, and to complete the diagnostic work-up of disturbances of ventilation, diffusion and/or perfusion, (capillary-)arterial blood gases at rest and under physical strain, sometimes supplemented by ergospirometry, are recommended. Ideally, lung function measurements are complemented by radiological and nuclear medicine techniques.

Full Text Available Coronary angiography underestimates or overestimates lesion severity, but still remains the cornerstone of decision making for revascularization for an overwhelming majority of interventional cardiologists. Guidelines recommend and endorse that non-invasive functional evaluation ought to precede revascularization. In real-world practice, this is adopted in less than 50% of patients who go on to have some form of revascularization. Fractional flow reserve (FFR) is the ratio of maximal blood flow in a stenotic coronary artery relative to maximal flow in the same vessel were it normal. It is independent of changes in heart rate, BP or prior infarction, and takes into account the contribution of collateral blood flow. It is a highly specific index with reasonably high sensitivity (88%), specificity (100%), positive predictive value (100%), and overall accuracy (93%). FFR provides objective determination of ischemia and helps select appropriate candidates for revascularization (for both CABG and PCI) in the cath lab itself before intervention, whereas intravascular ultrasound/optical coherence tomography guidance in PCI can secure the procedure by optimizing stent expansion. Functional angioplasty, simply, is incorporating both FFR and intravascular ultrasound into our daily intervention practices.
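In practice, FFR is commonly measured as the ratio of mean distal coronary pressure to mean aortic pressure during maximal hyperemia. A toy calculation along those lines (the pressure values and the 0.80 cut-off, commonly used to call a stenosis hemodynamically significant, are illustrative assumptions, not figures from this abstract):

```python
def fractional_flow_reserve(p_distal, p_aortic):
    """FFR approximated as mean distal coronary pressure divided by mean
    aortic pressure during maximal hyperemia (venous pressure neglected)."""
    return p_distal / p_aortic

# hypothetical measurements in mmHg
ffr = fractional_flow_reserve(p_distal=71.0, p_aortic=95.0)
significant = ffr <= 0.80  # commonly used cut-off for significant stenosis
```
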

Morphologic changes of the spinal canal and dural sac during spinal movement (flexion-extension) were analysed and reported on the basis of cross-sectional anatomy as early as 1942. After that, this movement was emphasized and used in myelography in many countries under the name of functional myelography, for accurate diagnosis of spinal stenosis such as herniated disc, but it has not been used commonly in Korea. The authors analysed the functional myelographic findings of 78 cases, 37 normal and 41 with surgically confirmed herniated disc, intending to confirm the necessity of spinal movement during myelography. The results were as follows: 1. In the normal group, the anterior border of the dural sac is straight in flexion, but indented in extension at the level of the intervertebral space, and this indentation is less prominent at L5-S1. 2. In the normal group with extension, the posterior indentation of the dural sac is more prominent at the level of the intervertebral space than at the body, the A-P diameter of the dural sac is narrowed at all levels of the intervertebral space except L5-S1, and the dural sac moves anteriorly (near to the posterior portion of the spinal body or intervertebral space) at the level of L5-S1 and all spinal bodies. 3. In disc patients, the anterior indentation of the dural sac persists in both views (flexion and extension) and is much exaggerated with extension, but less prominent at L5-S1. 4. In herniated disc patients with extension, the anterior movement of the anterior dural border at the level of L5-S1 is much decreased compared with normal.

The material presented in this book is suited for a first course in Functional Analysis which can be followed by Masters students. While covering all the standard material expected of such a course, efforts have been made to illustrate the use of various theorems via examples taken from differential equations and the calculus of variations, either through brief sections or through exercises. In fact, this book will be particularly useful for students who would like to pursue a research career in the applications of mathematics. The book includes a chapter on weak and weak* topologies and their applications to the notions of reflexivity, separability and uniform convexity. The chapter on the Lebesgue spaces also presents the theory of one of the simplest classes of Sobolev spaces. The book includes a chapter on compact operators and the spectral theory for compact self-adjoint operators on a Hilbert space. Each chapter has a large collection of exercises at the end. These illustrate the results of the text, show ...

The world resources of all clays are extremely large. Among the various types of clays, kaolin had the largest world mine production, 37.0 Mt in 2016. Kaolin is traditionally used in ceramics and refractories and as paper coating and filler. But kaolin, as is demonstrated in this paper, has bright potential for use in non-traditional, high value-added applications. This is particularly true for its principal component, the mineral species kaolinite, which has a chemical structure allowing its functionalization, leading to a variety of potential applications. Kaolinite is a layered 1:1 clay mineral, the layer being made of two different sheets, a tetrahedral silica sheet and an octahedral alumina sheet. Large dipole-dipole interactions, in addition to a network of H-bonds, link the siloxane surface of one layer to the aluminol surface of another, making intercalation of guest species in kaolinite challenging. There is, however, a limited number of molecular units (molecules or salts) that can directly intercalate in kaolinite to form "pre-intercalates". Once intercalated, these molecular units can be exchanged for a large number and variety of guests, providing access to the interlayer space of kaolinite and to its reactive aluminol internal surfaces. The intercalation of molecules of pharmacological interest showed the potential of kaolinite to act as a slow-releasing agent for drugs, and the intercalation of polymers resulted in the creation of intercalated nanocomposites. The intercalation of ionic liquids gave materials with solid-state ionic conductivity properties. Intercalates are, however, unstable in water. It was necessary to make these organo-inorgano nanohybrid materials resistant to hydrolysis and more thermally stable. The network of aluminol groups on the internal surfaces of kaolinite offers the opportunity to design and create controlled organo-inorgano nanohybrid materials, taking advantage of their reactivity, in

Master functions and discover how to write functional programs in R. In this book, you'll make your functions pure by avoiding side-effects; you'll write functions that manipulate other functions, and you'll construct complex functions using simpler functions as building blocks. In Functional Programming in R, you'll see how we can replace loops, which can have side-effects, with recursive functions that can more easily avoid them. In addition, the book covers why you shouldn't use recursion when loops are more efficient and how you can get the best of both worlds. Functional programming ... functions by combining simpler functions. You will: write functions in R including infix operators and replacement functions; create higher-order functions; pass functions to other functions and start using functions as data you can manipulate; use Filter, Map and Reduce functions to express the intent behind...

Asserts that though functional training is vital in all sporting preparation, it is only one aspect of the overall process. The paper defines functional training; discusses facets of functionality, functionality and balancing drills, and functional training and periodization; and concludes that functionality is best defined in terms of the outcome…

Optimization of the AIR algorithm for improved convergence and performance. The AIR method is an iterative algorithm for CT image reconstruction. As a result of its linearity with respect to the basis images, the AIR algorithm possesses well-defined, regular image quality metrics, e.g. the point spread function (PSF) or the modulation transfer function (MTF), unlike other iterative reconstruction algorithms. The AIR algorithm computes weighting images α to blend between a set of basis images that preferably have mutually exclusive properties, e.g. high spatial resolution or low noise. The optimized algorithm alternates between optimization of raw-data fidelity using an OS-SART-like update and regularization using gradient descent, as opposed to the initially proposed AIR, which used a straightforward gradient descent implementation. A regularization strength for a given task is chosen by formulating a requirement for the noise reduction and checking whether it is fulfilled for different regularization strengths, while monitoring the spatial resolution using the voxel-wise defined modulation transfer function for the AIR image. The optimized algorithm computes similar images in a shorter time compared to the initial gradient descent implementation of AIR. The result can be influenced by multiple parameters that can be narrowed down to a relatively simple framework to compute high-quality images. The AIR images, for instance, can have at least a 50% lower noise level compared to the sharpest basis image, while the spatial resolution is mostly maintained. The optimization improves performance by a factor of 6, while maintaining image quality. Furthermore, it was demonstrated that the spatial resolution for AIR can be determined using regular image quality metrics, given smooth weighting images. This is not possible for other iterative reconstructions as a result of their nonlinearity. A simple set of parameters for the algorithm is discussed that provides
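The blending step described above is linear in the basis images, which is what keeps metrics such as the PSF well defined. A minimal sketch of that idea, assuming a simple voxel-wise convex combination of two basis images (the array shapes, noise model and weighting image are illustrative, not the published AIR implementation):

```python
import numpy as np

def air_blend(alpha, sharp, smooth):
    """Voxel-wise convex combination of two basis images: alpha -> 1 selects
    the high-resolution basis image, alpha -> 0 the low-noise one. The output
    is linear in the basis images wherever alpha is held fixed."""
    return alpha * sharp + (1.0 - alpha) * smooth

rng = np.random.default_rng(0)
sharp = rng.normal(100.0, 10.0, size=(64, 64))   # high resolution, noisy
smooth = np.full((64, 64), 100.0)                # low noise, blurred
alpha = np.full((64, 64), 0.3)                   # smooth weighting image
blended = air_blend(alpha, sharp, smooth)
```

With a constant alpha of 0.3, the blended image carries only 30% of the noise amplitude of the sharp basis image, illustrating how the weighting trades resolution against noise.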

Wave-front coding has great prospects for extending the depth of field of an optical imaging system and reducing optical aberrations, but image quality and noise performance are inevitably reduced. Based on theoretical analysis of the wave-front coding system and the phase function expression of the cubic phase plate, this paper analyzed and utilized the feature that the phase function expression is invariant in the new coordinate system when the phase plate rotates by different angles around the z-axis, and we proposed a method based on rotation of the phase plate and image fusion. First, the phase plate is rotated by a certain angle around the z-axis; the shape and distribution of the PSF obtained on the image surface remain unchanged, and the rotation angle and direction are consistent with those of the phase plate. Then, the intermediate blurred image is filtered with the rotation-adjusted point spread function. Finally, the restored images were fused by the Laplacian pyramid image fusion method and the Fourier transform spectrum fusion method, and the results were evaluated subjectively and objectively. In this paper, we used MATLAB to simulate the images. Using the Laplacian pyramid image fusion method, the signal-to-noise ratio of the image is increased by 19%-27%, the clarity is increased by 11%-15%, and the average gradient is increased by 4%-9%. Using the Fourier transform spectrum fusion method, the signal-to-noise ratio of the image is increased by 14%-23%, the clarity is increased by 6%-11%, and the average gradient is improved by 2%-6%. The experimental results show that image processing by the above method can improve the quality of the restored image, improve image clarity, and effectively preserve the image information.
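The cubic phase plate and its PSF can be sketched numerically: apply the cubic phase φ = α(x³ + y³) to a circular pupil and take the squared magnitude of the Fourier transform. The grid size, pupil normalization and phase strength α below are illustrative assumptions, not values from the paper.

```python
import numpy as np

n = 256
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x)
pupil = (X**2 + Y**2 <= 1.0).astype(float)   # circular aperture
alpha = 20.0                                  # cubic phase strength (illustrative)
phase = alpha * (X**3 + Y**3)                 # cubic phase plate: phi = alpha*(x^3 + y^3)

# PSF as squared magnitude of the Fourier transform of the pupil field
field = pupil * np.exp(1j * phase)
psf = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
psf /= psf.sum()                              # normalize to unit energy
```

Rotating the plate about the z-axis corresponds to rotating (X, Y) in the phase term, which rotates the resulting PSF by the same angle, which is the invariance the fusion method exploits.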

... deconvolution procedure that recovers images that have been blurred by a known point spread function. The.... Wilson RM. Nanodiamonds are promising quantum probes of living cells. Phys Today 2011 Aug;64(8):17. [doi...

Current methods for imaging joint motion are limited to either two-dimensional (2D) video fluoroscopy or to animated motions from a series of static three-dimensional (3D) images. 3D movement patterns can be detected from biplane fluoroscopy images matched with computed tomography images. This involves several x-ray modalities and sophisticated 2D-to-3D matching for the complex wrist joint. We present a method for the acquisition of dynamic 3D images of a moving joint. In our method a 3D-rotational x-ray (3D-RX) system is used to image a cyclically moving joint. The cyclic motion is synchronized with the x-ray acquisition to yield multiple sets of projection images, which are reconstructed into a series of time-resolved 3D images, i.e., four-dimensional rotational x-ray (4D-RX). To investigate the obtained image quality, the full width at half maximum (FWHM) of the point spread function (PSF), determined via the edge spread function, and the contrast-to-noise ratio (CNR) between air and phantom were measured on reconstructions of a bullet-and-rod phantom, using 4D-RX as well as stationary 3D-RX images. The CNR in volume reconstructions based on 251 projection images in the static situation and on 41 and 34 projection images of a moving phantom was 6.9, 3.0, and 2.9, respectively. The average FWHM of the PSF of these same images was, respectively, 1.1, 1.7, and 2.2 mm orthogonal to the motion and 0.6, 0.7, and 1.0 mm parallel to the direction of motion. The main deterioration of 4D-RX images compared to 3D-RX images is due to the low number of projection images used and not to the motion of the object. Using 41 projection images seems the best setting for the current system. Experiments on a postmortem wrist show the feasibility of the method for imaging 3D dynamic joint motion. We expect that 4D-RX will pave the way to improved assessment of joint disorders by detection of 3D dynamic motion patterns in joints
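The FWHM-via-edge-spread-function measurement used above can be sketched in 1D: differentiate the edge spread function (ESF) to obtain the line spread function (LSF), then measure the width at half its maximum. The function name and the synthetic Gaussian-blurred edge below are assumptions for illustration, not the authors' code.

```python
import numpy as np
from math import erf

def fwhm_from_esf(esf, dx):
    """Estimate the PSF FWHM (in units of dx) from a sampled edge spread
    function: differentiate to obtain the line spread function, then measure
    the span of samples at or above half its maximum."""
    lsf = np.abs(np.gradient(esf, dx))
    above = np.where(lsf >= lsf.max() / 2.0)[0]
    return (above[-1] - above[0]) * dx

# Synthetic edge blurred by a Gaussian PSF with a known 1.1 mm FWHM
dx = 0.01
xs = np.arange(-5.0, 5.0, dx)
sigma = 1.1 / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # FWHM -> sigma
esf = np.array([0.5 * (1.0 + erf(x / (sigma * np.sqrt(2.0)))) for x in xs])
estimated_fwhm = fwhm_from_esf(esf, dx)
```

On this synthetic edge the estimator recovers the 1.1 mm FWHM to within the sampling step.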

Purpose: Prompt, reliable detection of intracranial hemorrhage (ICH) is essential for treatment of stroke and traumatic brain injury, and would benefit from availability of imaging directly at the point of care. This work reports the performance evaluation of a clinical prototype of a cone-beam CT (CBCT) system for ICH imaging and introduces novel algorithms for model-based reconstruction with compensation for data truncation and patient motion. Methods: The tradeoffs in dose and image quality were investigated as a function of analytical (FBP) and model-based iterative reconstruction (PWLS) algorithm parameters using phantoms with ICH-mimicking inserts. Image quality in clinical applications was evaluated in a human cadaver imaged with simulated ICH. Objects outside of the field of view (FOV), such as the head-holder, were found to introduce challenging truncation artifacts in PWLS that were mitigated with a novel multi-resolution reconstruction strategy. Following phantom and cadaver studies, the scanner was translated to a clinical pilot study. Initial clinical experience indicates the presence of motion in some patient scans, and an image-based motion estimation method that does not require fiducial tracking or prior patient information was implemented and evaluated. Results: The weighted CTDI for a nominal scan technique was 22.8 mGy. The high-resolution FBP reconstruction protocol achieved <0.9 mm full width at half maximum (FWHM) of the point spread function (PSF). The PWLS soft-tissue reconstruction showed <1.2 mm PSF FWHM and lower noise than FBP at the same resolution. Effects of truncation in PWLS were mitigated with the multi-resolution approach, resulting in a 60% reduction in root mean squared error compared to conventional PWLS. Cadaver images showed clear visualization of anatomical landmarks (ventricles and sulci), and ICH was conspicuous. The motion compensation method was shown in clinical studies to restore visibility of fine bone structures

Dedicated cone beam breast CT (CBBCT) suffers from x-ray scatter contamination. We aim to identify the source of the significant difference between the scatter distributions estimated by two recent methods proposed by our group and to investigate its effect on CBBCT image quality. We recently proposed two novel methods of scatter correction for CBBCT, using a library-based (LB) technique and a forward projection (FP) model. Despite similar enhancement of CBBCT image quality, these two methods obtain very different scatter distributions. We hypothesize that off-focus radiation (OFR) is the contributor, resulting in nontrivial signals in x-ray projections, which are ignored in the scatter estimation via the LB method. Experiments using a thin-wire test tool are designed to study the effect of OFR on CBBCT spatial resolution by measuring the point spread function (PSF) and the modulation transfer function (MTF). A narrow collimator setting is used to suppress the OFR-induced signals. In addition, "PSFs" and "MTFs" are measured on clinical CBBCT images obtained by the LB and FP methods using small calcifications as point sources. The improvement of spatial resolution achieved by suppressing OFR in the wire experiment, as well as in the clinical study, is quantified by the improvement ratios of PSFs and of spatial frequencies at different MTF values. Our hypothesis that OFR causes the imaging difference between the FP and LB methods is verified if these ratios obtained from experimental and clinical data are consistent. In the wire experiment, the results show that suppression of OFR increases the maximum signal of the PSF by about 14% and reduces the full-width-at-half-maximum (FWHM) by about 12.0%. Similar improvement in spatial resolution is achieved by the FP method compared with the LB method in the patient study. The improvement ratios of spatial frequencies at different MTF values without OFR match very well in both studies at a level of around 16%, with an
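The PSF-to-MTF relationship underlying these measurements can be sketched in 1D: the MTF is the normalized magnitude of the Fourier transform of the PSF. The sampling and the Gaussian test PSF below are illustrative assumptions.

```python
import numpy as np

def mtf_from_psf(psf, dx):
    """1D modulation transfer function: normalized magnitude of the Fourier
    transform of the point spread function. Returns (frequencies, MTF)."""
    mtf = np.abs(np.fft.rfft(psf))
    mtf /= mtf[0]                               # normalize to 1 at zero frequency
    freqs = np.fft.rfftfreq(len(psf), d=dx)     # cycles per unit length
    return freqs, mtf

# Gaussian test PSF; its MTF is analytically exp(-2*pi^2*sigma^2*f^2)
dx = 0.05
x = np.arange(-10.0, 10.0, dx)
sigma = 0.5
psf = np.exp(-x**2 / (2.0 * sigma**2))
freqs, mtf = mtf_from_psf(psf, dx)
```

Because the magnitude is taken, the result is insensitive to where the PSF is centered on the grid, which is convenient when the point source (here, a calcification) is not at the origin.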

Limited spatial resolution of positron emission tomography (PET) often requires partial volume correction (PVC) to improve the accuracy of quantitative PET studies. Conventional region-based PVC methods use co-registered high-resolution anatomical images (e.g. computed tomography (CT) or magnetic resonance images) to identify regions of interest. Spill-over between regions is accounted for by calculating regional spread functions (RSFs) in a geometric transfer matrix (GTM) framework. This paper describes a new analytically derived symmetric GTM (sGTM) method that relies on spill-over between RSFs rather than between regions. It is shown that the sGTM is mathematically equivalent to Labbe's method; however, it is a region-based method rather than a voxel-based method and it avoids handling large matrices. The sGTM method was validated using two three-dimensional (3D) digital phantoms and one physical phantom. A 3D digital sphere phantom with sphere diameters ranging from 5 to 30 mm and a sphere-to-background uptake ratio of 3-to-1 was used. A 3D digital brain phantom was used with four different anatomical regions and a background region with different activities assigned to each region. A physical sphere phantom with the same geometry and uptake as the digital sphere phantom was manufactured and PET-CT images were acquired. Using these three phantoms, the performance of the sGTM method was assessed against that of the GTM method in terms of accuracy, precision, noise propagation and robustness. The robustness was assessed by applying mis-registration errors and errors in estimates of the PET point spread function (PSF). In all three phantoms, the results showed that the sGTM method has accuracy similar to that of the GTM method, to within 5%. However, the sGTM method showed better precision and noise propagation than the GTM method, especially for spheres smaller than 13 mm. Moreover, the sGTM method was more robust than the GTM method when mis-registration errors or
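At its core, the GTM framework amounts to solving a small linear system relating observed regional means to true regional activities: observed_i = Σ_j W[i, j] · true_j, where W[i, j] is the fraction of region j's RSF falling in region i. A hedged sketch, with the 3×3 matrix and activity values invented for illustration:

```python
import numpy as np

# Hypothetical 3-region geometric transfer matrix: W[i, j] is the fraction
# of activity from region j observed in region i after PSF blurring.
W = np.array([
    [0.80, 0.15, 0.05],
    [0.10, 0.75, 0.10],
    [0.05, 0.10, 0.85],
])

observed = np.array([52.0, 48.5, 30.0])       # measured regional mean activities
true_values = np.linalg.solve(W, observed)    # partial-volume-corrected means
```

Solving the system inverts the spill-over, recovering regional means that would have been measured with a perfect (delta-function) PSF.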

Physical function declines with aging in humans, as does sensory function. Motor slowness and unbalanced gait occur, and declines in visual acuity and hearing confine elderly people to limited daily activity. Psychological functions are also thought to decline with aging. In the International Classification of Functioning, Disability and Health (ICF), psychological functions are classified into attention, memory, psychomotor, emotion, perception, thought, higher-level cognitive, language, calculation, sequencing complex movements, experience of self and time functions, and unspecified functions. It is difficult to assess an individual psychological function in isolation, because some functions may affect each other and the results of evaluating one psychological function may not represent the meaning of that function. There have been numerous reports on physical function in aging in cross-sectional or longitudinal study designs. In this article, we review changes of psychological function in aging.

Full Text Available From the integration of nonsymmetrical hyperbolas, a one-parameter generalization of the logarithmic function is obtained. Inverting this function, one obtains the generalized exponential function. Motivated by mathematical curiosity, we show that these generalized functions are suitable to generalize some probability density functions (pdfs). A very reliable rank distribution can be conveniently described by the generalized exponential function. Finally, we turn our attention to the generalization of one- and two-tail stretched exponential functions. We obtain, as particular cases, the generalized error function, the Zipf-Mandelbrot pdf, and the generalized Gaussian and Laplace pdfs. Their cumulative functions and moments were also obtained analytically.
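A common one-parameter generalization of the logarithm consistent with this description is ln_q(x) = (x^(1-q) - 1)/(1-q), with the generalized exponential as its inverse; the ordinary log and exp are recovered in the limit q → 1. The specific parameterization below is an assumption for illustration, and the paper's convention may differ.

```python
import numpy as np

def gen_log(x, q):
    """One-parameter generalized logarithm ln_q(x) = (x**(1-q) - 1)/(1-q);
    reduces to the natural logarithm as q -> 1."""
    if q == 1.0:
        return np.log(x)
    return (x**(1.0 - q) - 1.0) / (1.0 - q)

def gen_exp(y, q):
    """Inverse of gen_log: exp_q(y) = (1 + (1-q)*y)**(1/(1-q));
    reduces to the ordinary exponential as q -> 1."""
    if q == 1.0:
        return np.exp(y)
    return (1.0 + (1.0 - q) * y)**(1.0 / (1.0 - q))
```

For example, gen_exp(gen_log(x, q), q) returns x for any q where both are defined, and gen_log(x, q) approaches log(x) as q approaches 1.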

Functionality and homogeneity are two of the five Sustainable Safety principles. The functionality principle aims for roads to have but one exclusive function and distinguishes between traffic function (flow) and access function (residence). The homogeneity principle aims at differences in mass,

Simultaneous imaging systems combining positron emission tomography (PET) and magnetic resonance imaging (MRI) have been actively investigated. A PET/MR imaging system (GE Healthcare) comprising a time-of-flight (TOF) PET system utilizing silicon photomultipliers (SiPMs) and a 3-tesla (3T) MRI was recently installed at our institution. The small-ring (60 cm diameter) TOF PET subsystem of this PET/MRI system can generate images with higher spatial resolution compared with conventional PET systems. We have examined theoretically and experimentally the effect of uniform magnetic fields on the spatial resolution for high-energy positron emitters. Positron emitters including 18F, 124I, and 68Ga were simulated in water using the Geant4 Monte Carlo toolkit in the presence of a uniform magnetic field (0, 3, and 7 tesla). The positron annihilation position was tracked to determine the 3D spatial distribution of the 511-keV gamma-ray emission. The full-width at tenth maximum (FWTM) of the positron point spread function (PSF) was determined. Experimentally, 18F and 68Ga line source phantoms in air and water were imaged with an investigational PET/MRI system and a PET/CT system to investigate the effect of the magnetic field on the spatial resolution of PET. The full-width at half maximum (FWHM) of the line spread function (LSF) from the line source was determined as the system spatial resolution. Simulations and experimental results show that the in-plane spatial resolution was slightly improved at field strengths as low as 3 tesla, especially when resolving signal from high-energy positron emitters at the air-tissue boundary.

Adaptive optic (AO) systems delivering high levels of wavefront correction are now common at observatories. One of the main limitations to image quality after wavefront correction comes from atmospheric refraction. An atmospheric dispersion compensator (ADC) is employed to correct for atmospheric refraction. The correction is applied based on a look-up table consisting of dispersion values as a function of telescope elevation angle. Such look-up-table-based correction is imperfect, leaving residual dispersion in the point-spread function (PSF), and is insufficient when sub-milliarcsecond precision is required. The presence of residual dispersion can limit the achievable contrast with high-performance coronagraphs and can compromise high-precision astrometric measurements. In this paper, we present the first on-sky closed-loop correction of atmospheric dispersion that directly uses science-path images. The concept behind the measurement of dispersion utilizes the chromatic scaling of focal-plane speckles. An adaptive speckle grid generated with a deformable mirror (DM) with a sufficiently large number of actuators is used to accurately measure the residual dispersion and subsequently correct it by driving the ADC. With the Subaru Coronagraphic Extreme AO (SCExAO) system, we have demonstrated on-sky closed-loop correction of residual dispersion to <1 mas across the H band. This work will aid the direct detection of habitable exoplanets with upcoming extremely large telescopes (ELTs) and also provides a diagnostic tool to test the performance of instruments that require sub-milliarcsecond correction.

The crystalline lens is responsible for focusing at different distances (accommodation) in the human eye. This organ grows throughout life, increasing in size and rigidity. Moreover, with this growth it gradually loses transparency and becomes opacified, causing what is known as cataract. Cataract is the most common cause of visual loss in the world. At present, this visual loss is recoverable by surgery in which the opacified lens is destroyed (phacoemulsification) and replaced by an implanted intraocular lens (IOL). If the implanted IOL is monofocal, the patient loses the natural capacity of accommodation and consequently depends on an external optical correction to focus at different distances. To avoid this dependency, multifocal IOL designs have been developed. Multifocality can be achieved by using either a refractive surface with different radii of curvature (refractive IOLs) or a diffractive surface (diffractive IOLs). To analyze the optical quality of IOLs, it is necessary to test them on an optical bench that complies with the ISO 11979-2:1999 standard (Ophthalmic implants. Intraocular lenses. Part 2: Optical properties and test methods). In addition to analyzing the IOLs according to the ISO standard, we have designed an optical bench that allows us to simulate the conditions of a real human eye. To do so, we use artificial corneas with different amounts of optical aberrations and several illumination sources with different spectral distributions. Moreover, the design of the test bench includes the possibility of testing the IOLs under off-axis conditions as well as in the presence of decentration and/or tilt. Finally, the optical imaging quality of the IOLs is assessed by using common metrics such as the Modulation Transfer Function (MTF), the Point-Spread Function (PSF) and/or the Strehl ratio (SR), or via registration of the IOL's wavefront with a Hartmann-Shack sensor and its
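The PSF and Strehl-ratio metrics mentioned above can be sketched with a simple Fraunhofer model: the PSF is the squared modulus of the Fourier transform of the complex pupil function, and the Strehl ratio is the aberrated-to-ideal peak ratio. A toy example (the aberration here is an arbitrary random phase screen, purely for illustration, not a model of any real IOL):

```python
import numpy as np

N = 256
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
pupil = (np.hypot(x, y) <= N // 4).astype(float)   # circular aperture

def psf_from_pupil(pupil, phase):
    """Fraunhofer PSF: |FFT of complex pupil|^2, normalized to unit total."""
    field = pupil * np.exp(1j * phase)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
    return psf / psf.sum()

psf_ideal = psf_from_pupil(pupil, np.zeros_like(pupil))

rng = np.random.default_rng(0)
phase = 0.3 * rng.standard_normal(pupil.shape)     # toy aberration (rad)
psf_aber = psf_from_pupil(pupil, phase)

# Strehl ratio: aberrated peak relative to the diffraction-limited peak
strehl = psf_aber.max() / psf_ideal.max()
```

Because the pupil transmits the same total energy in both cases, the normalized peak ratio is a valid Strehl estimate; for small aberrations it approaches the Marechal approximation exp(-sigma^2).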

Fungi of the genus Ganoderma are basidiomycetes that have been used as traditional medicine in Asia and have been shown to exhibit various pharmacological activities. We recently found that PS-F2, a polysaccharide fraction purified from the submerged culture broth of Ganoderma formosanum, stimulates the maturation of dendritic cells and primes a T helper 1 (Th1)-polarized adaptive immune response in vivo. In this study, we investigated whether the immune adjuvant function of PS-F2 can stimulate antitumor immune responses in tumor-bearing mice. Continuous intraperitoneal or oral administration of PS-F2 effectively suppressed the growth of colon 26 (C26) adenocarcinoma, B16 melanoma, and sarcoma 180 (S180) tumor cells in mice without adverse effects on the animals' health. PS-F2 did not cause direct cytotoxicity on tumor cells, and it lost the antitumor effect in mice with severe combined immunodeficiency (SCID). CD4(+) T cells, CD8(+) T cells, and serum from PS-F2-treated tumor-bearing mice all exhibited antitumor activities when adoptively transferred to naïve animals, indicating that PS-F2 treatment stimulates tumor-specific cellular and humoral immune responses. These data demonstrate that continuous administration of G. formosanum polysaccharide PS-F2 can activate host immune responses against ongoing tumor growth, suggesting that PS-F2 can potentially be developed into a preventive/therapeutic agent for cancer immunotherapy.

This invention relates to the investigation of body function, especially small bowel function but also liver function, using bile acids and bile salts or their metabolic precursors labelled with radio isotopes and selenium or tellurium. (author)

International Series of Monographs in Natural Philosophy, Volume 32: Random Functions and Turbulence focuses on the use of random functions as mathematical methods. The manuscript first offers information on the elements of the theory of random functions. Topics include determination of statistical moments by characteristic functions; functional transformations of random variables; multidimensional random variables with spherical symmetry; and random variables and distribution functions. The book then discusses random processes and random fields, including stationarity and ergodicity of random

Purpose: In digital breast tomosynthesis (DBT) systems capable of digital mammography (DM), Al filters are used during DBT and K-edge filters during DM. The potential for standardizing the x-ray filters with Al, instead of K-edge filters, was investigated with intent to reduce exposure duration and to promote a simpler system design. Methods: Analytical computations of the half-value thickness (HVT) and the photon fluence per mAs (photons/mm2/mAs) for K-edge filters (50 µm Rh; 50 µm Ag) were compared with Al filters of varying thickness. Two strategies for matching the HVT of K-edge and Al filtered spectra were investigated: varying the kVp at fixed Al thickness, or varying the Al thickness at matched kVp. For both strategies, Al filters were an order of magnitude thicker than K-edge filters. Hence, Monte Carlo simulations were conducted with the GEANT4 toolkit to determine if the scatter-to-primary ratio (SPR) and the point-spread function of scatter (scatter PSF) differed between Al and K-edge filters. Results: Results show the potential for replacing currently used K-edge filters with Al. At fixed Al thickness (700 µm), kVp changes of ±1 kVp and +(1-3) kVp matched the HVT of Rh and Ag filtered spectra, respectively. At matched kVp, Al thicknesses in the ranges 650-750 µm and 750-860 µm matched the HVT of Rh and Ag filtered spectra, respectively. Photon fluence/mAs with Al filters was 1.5-2.5 times higher, depending on kVp and Al thickness, than with K-edge filters. Although the Al thickness was an order of magnitude greater than that of the K-edge filters, neither the SPR nor the scatter PSF differed from those of the K-edge filters. Conclusion: The use of Al filters for digital mammography is potentially feasible. The increased fluence/mAs with Al could decrease exposure duration for the combined DBT+DM exam and simplify system design. Effects of the x-ray spectrum change due to Al filtration on radiation dose, signal, noise, contrast, and related metrics are being investigated. Funding support: Supported in part by NIH R21CA176470 and R01
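The HVT computation described in the Methods can be sketched as a root-finding problem: find the filter thickness at which the total transmitted fluence drops to half. A minimal illustration (the spectrum bins and attenuation coefficients below are made-up numbers, not data from the study):

```python
import math

def half_value_thickness(fluence, mu, t_max=10.0):
    """Thickness of filter material that halves the total transmitted fluence.

    fluence: photon fluence per energy bin
    mu:      matching linear attenuation coefficients (1/mm)
    """
    total = sum(fluence)

    def transmitted(t):
        return sum(f * math.exp(-m * t) for f, m in zip(fluence, mu))

    lo, hi = 0.0, t_max
    for _ in range(100):                 # bisection on transmitted(t) = total/2
        mid = 0.5 * (lo + hi)
        if transmitted(mid) > 0.5 * total:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# monoenergetic sanity check: HVT = ln(2)/mu
assert abs(half_value_thickness([1.0], [0.5]) - math.log(2) / 0.5) < 1e-6

# hypothetical polyenergetic spectrum (illustrative values only)
hvt = half_value_thickness([0.2, 0.5, 0.3], [0.9, 0.5, 0.3])
```

For a polyenergetic beam the HVT lies between the monoenergetic HVTs of the softest and hardest bins, which is why kVp and thickness can be traded off to match a K-edge-filtered spectrum.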

Urethral pressure profilometry (UPP) is used in the diagnosis of stress urinary incontinence (SUI), which is a significant medical, social, and economic problem. Low spatial pressure resolution, the common occurrence of artifacts, and uncertainties in data location limit the diagnostic value of UPP. To overcome these limitations, high-definition urethral pressure profilometry (HD-UPP), combining enhanced UPP hardware and signal processing algorithms, has been developed. In this work, we present the different signal processing steps in HD-UPP and show experimental results from female minipigs. We use a special microtip catheter with high angular pressure resolution and an integrated inclination sensor. Signals from the catheter are filtered and time-correlated artifacts removed. A signal reconstruction algorithm processes the pressure data into a detailed pressure image of the urethra's inside. Finally, the pressure distribution on the urethra's outside is calculated through deconvolution. A mathematical model of the urethra is contained in a point-spread function (PSF), which is identified from geometric and material properties of the urethra. We additionally investigate the PSF's frequency response to determine the relevant frequency band for pressure information on the urinary sphincter. Experimental pressure data are spatially located and processed into high-resolution pressure images. Artifacts are successfully removed from the data without blurring other details. The pressure distribution on the urethra's outside is reconstructed and compared to the one on the inside. Finally, the pressure images are mapped onto the urethral geometry calculated from inclination and position data to provide an integrated image of pressure distribution, anatomical shape, and location. With its advanced sensing capabilities, the novel microtip catheter collects an unprecedented amount of urethral pressure data. Through sequential signal processing steps, physicians are provided with
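The deconvolution step, recovering the outside pressure distribution from the inside one given a PSF, can be sketched as regularized division in the Fourier domain (a Wiener-style filter; the paper's actual PSF identification and deconvolution procedure is more elaborate, and the signals below are synthetic):

```python
import numpy as np

def wiener_deconvolve(measured, psf, noise_reg=1e-3):
    """Recover a source profile from a blurred measurement by regularized
    FFT division with the system PSF (circular convolution model)."""
    n = len(measured)
    H = np.fft.fft(psf, n)
    G = np.fft.fft(measured)
    # Wiener-style filter: H* / (|H|^2 + reg) damps frequencies where H ~ 0
    F = G * np.conj(H) / (np.abs(H)**2 + noise_reg)
    return np.real(np.fft.ifft(F))

n = 256
truth = np.zeros(n)
truth[100:130] = 1.0                                  # synthetic pressure step
psf = np.exp(-np.arange(n) / 5.0)
psf /= psf.sum()                                      # hypothetical system PSF
measured = np.real(np.fft.ifft(np.fft.fft(truth) * np.fft.fft(psf)))
recovered = wiener_deconvolve(measured, psf, noise_reg=1e-4)
```

The regularization constant trades noise amplification against residual blur; with noisy data it would be raised well above the value used in this noise-free sketch.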

The Deep Space Climate Observatory (DSCOVR) is designed to study the daytime Earth radiation budget by means of the onboard Earth Polychromatic Imaging Camera (EPIC) and the National Institute of Standards and Technology Advanced Radiometer (NISTAR). The EPIC imager observes in several shortwave bands (317-780 nm), while NISTAR measures the top-of-atmosphere (TOA) whole-disk radiance in shortwave and total broadband windows. Calculation of albedo and outgoing longwave flux requires high-resolution scene identification, such as the radiance observations and cloud property retrievals from low-Earth-orbit and geostationary satellite imagers. These properties have to be co-located with EPIC imager pixels to provide scene identification and to select anisotropic directional models, which are then used to adjust the NISTAR-measured radiance and subsequently obtain the global daytime shortwave and longwave fluxes. This work presents an algorithm for optimal merging of selected radiances and cloud properties derived from multiple satellite imagers to obtain seamless global hourly composites at 5-km resolution. The highest quality observation is selected by means of an aggregated rating that incorporates several factors, such as the time nearest to the EPIC observation, the lowest viewing zenith angle, and others. This process provides a smoother transition and avoids abrupt changes in the merged composite data. Higher spatial accuracy in the composite product is achieved by using inverse mapping with gradient search during reprojection and bicubic interpolation for pixel resampling. The composite data are subsequently remapped into the EPIC-view domain by convolving composite pixels with the EPIC point-spread function (PSF), defined with half-pixel accuracy. Within every EPIC footprint, the PSF-weighted average radiances and cloud properties are computed for each cloud phase and then stored within five data subsets (clear-sky, water cloud, ice cloud, total cloud, and no
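The PSF-weighted averaging within a footprint can be sketched as a masked weighted mean, computed once per cloud-phase subset. A toy illustration (the Gaussian PSF and integer phase labels are placeholders for the real EPIC PSF and phase retrievals):

```python
import numpy as np

def footprint_average(radiance, phase, psf, target_phase):
    """PSF-weighted mean radiance of composite pixels of one cloud phase
    within a single footprint."""
    mask = (phase == target_phase)
    w = psf[mask]
    if w.sum() == 0:
        return np.nan                     # no pixels of this phase in footprint
    return float((w * radiance[mask]).sum() / w.sum())

# toy 9x9 footprint with a Gaussian PSF
y, x = np.mgrid[-4:5, -4:5]
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))

rng = np.random.default_rng(2)
radiance = 100.0 + rng.normal(0, 5, size=psf.shape)
phase = rng.integers(0, 2, size=psf.shape)    # 0 = clear, 1 = water cloud (toy)
clear_mean = footprint_average(radiance, phase, psf, 0)
```

Pixels near the footprint center dominate the average, mirroring how the EPIC PSF down-weights composite pixels toward the footprint edge.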

We propose an efficient approximation to the nonlinear phase diversity method for wavefront reconstruction from intensity measurements, in order to avoid the shortcomings that prevent its real-time application, such as its computational complexity and the presence of local minima. The new method is called linear sequential phase diversity (LSPD). The method assumes that the residual phase aberration is small and makes use of a first-order Taylor expansion of the point-spread function (PSF). The Taylor expansion is performed in two different phase diversities, which can be arbitrary (large) pupil shapes, in order to optimize the phase retrieval. For static aberrations, LSPD makes use of two images that are collected at each iteration step of the algorithm. In each step the residual phase aberrations are estimated by solving a linear least-squares problem, followed by the use of a deformable mirror to correct for the aberrations. The computational complexity of LSPD is O(m*m), where m*m is the number of pixels. For the static case, the convergence of the LSPD iterations has been studied and experimentally verified. In an extensive comparison the method is compared with the recently proposed method of [1]. This study demonstrates improved performance, both computationally and in accuracy, with respect to existing competitors that also linearize the PSF. A further contribution of the paper is that we extend the static LSPD method to the case of dynamic wavefront reconstruction based on intensity measurements. Here the dynamics are assumed to be modelled, as is standard, by a linear innovation model such that its spectrum approximates, e.g., that given by Kolmogorov. The advantage of applying the dynamic variant of the LSPD method is that in closed loop the assumption that the residual phase aberration is small is justifiable, since the goal of the controller is to reduce (minimize) the residual phase aberration. This unique contribution

Purpose: In Contrast Enhanced Spectral Mammography (CESM), a Rh filter is often used during low-energy image acquisition. The potential for using Ag, In, and Sn filters, which exhibit K-edges closer to, and just below, that of iodine, instead of the Rh filter, was investigated for low-energy image acquisition. Methods: Analytical computations of the half-value thickness (HVT) and the photon fluence per mAs (photons/mm2/mAs) for 50 µm Rh were compared with other potential K-edge filters (Ag, In, and Sn), all with K-absorption edges below that of iodine. Two strategies were investigated: fixed kVp and filter thickness (50 µm for all filters), resulting in HVT variation, and fixed kVp and HVT, resulting in variation of the Ag, In, and Sn thickness. Monte Carlo simulations (GEANT4) were conducted to determine if the scatter-to-primary ratio (SPR) and the point-spread function of scatter (scatter PSF) differed between Rh and the other K-edge filters. Results: Ag, In, and Sn filters (50 µm thick) increased photon fluence/mAs by factors of 1.3-1.4, 1.8-2, and 1.7-2, respectively, at 28-32 kVp compared to 50 µm Rh, which could decrease exposure time. Additionally, the fraction of the spectrum closer to and just below iodine's K-edge increased with these filters, which could improve post-subtraction image contrast. For HVT matched to 50 µm Rh filtered spectra, the thickness ranges for Ag, In, and Sn were 41-44 µm, 49-55 µm, and 45-53 µm, increasing photon fluence/mAs by factors of 1.5-1.7, 1.6-2, and 1.6-2.2, respectively. Monte Carlo simulations showed that neither the SPR nor the scatter PSF of Ag, In, and Sn differed from that of Rh, indicating no additional detriment due to x-ray scatter. Conclusion: The use of Ag, In, and Sn filters for low-energy image acquisition in CESM is potentially feasible and could decrease exposure time and may improve post-subtraction image contrast. Effects of these filters on radiation dose, contrast, noise, and associated metrics are being investigated. Funding Support: Supported in

Cortical bone is an important contributor to bone strength and is pivotal to understanding the etiology of osteoporotic fractures and the specific mechanisms of antiosteoporotic treatment regimens. 3D computed tomography (CT) can be used to measure cortical thickness, density, and mass in the proximal femur, lumbar vertebrae, and distal forearm. However, the spatial resolution of clinical whole-body CT scanners is limited by radiation exposure; partial volume artefacts severely impair the accurate assessment of cortical parameters, in particular in locations where the cortex is thin, such as in the lumbar vertebral bodies or in the femoral neck. Model-based deconvolution approaches recover the cortical thickness by numerically deconvolving the image along 1D profiles using an estimated scanner point-spread function (PSF) and a hypothesized uniform cortical bone mineral density (reference density). In this work we provide a new, essentially analytical, unique solution to the model-based cortex recovery problem using a few characteristics of the measured profile, thus eliminating the nonlinear optimization step for deconvolution. The proposed approach also eliminates the PSF from the model and reduces sensitivity to errors in the reference density. Additionally, run-time and memory efficient computation of cortical thickness was achieved with the help of a lookup table. The method's accuracy and robustness were validated and compared to those of a deconvolution approach recently proposed for cortical bone and of the 50% relative threshold technique: in a simulated environment with noise and various error levels in the reference density, and using CT acquisitions of the European Forearm Phantom (EFP II), a modification of a widely used anthropomorphic standard of cortical and trabecular bone compartments, scanned with various scan protocols. Results of simulations and of phantom data analysis verified the following properties of the new method: 1
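One way to see why the PSF can be eliminated from such a model: convolution with a normalized PSF preserves the integral of the density profile, so the area under a profile across the cortex equals thickness times reference density regardless of the blur. A sketch with our own toy numbers (zero background assumed for simplicity; the paper's method uses more profile characteristics than this):

```python
import numpy as np

def cortical_thickness(profile, dx, reference_density):
    """Thickness estimate from a 1D profile across the cortex.

    A normalized scanner PSF preserves the profile integral under
    convolution, so area = thickness * reference_density independent
    of the blur (background assumed zero here)."""
    area = profile.sum() * dx
    return area / reference_density

dx = 0.1                                       # mm per sample
x = np.arange(-10.0, 10.0, dx)                 # 200 samples
density = 1200.0                               # hypothetical cortical BMD (mg/cm^3)
truth = np.zeros_like(x)
truth[90:110] = density                        # 2 mm thick cortex
psf = np.exp(-x**2 / (2 * 0.8**2))
psf /= psf.sum()                               # normalized scanner PSF
blurred = np.convolve(truth, psf, mode="same")
t = cortical_thickness(blurred, dx, density)   # recovers ~2.0 mm despite blur
```

The estimate is exact for any blur width as long as the PSF is normalized and the profile tails are captured, which is the key robustness property exploited here.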

Diffusion magnetic resonance imaging (d-MRI) is a powerful non-invasive and non-destructive technique for characterizing brain tissue on the microscopic scale. However, the lack of validation of d-MRI by independent experimental means poses an obstacle to accurate interpretation of data acquired using this method. Recently, structure tensor analysis has been applied to light microscopy images, and this technique holds promise as a powerful validation strategy for d-MRI. Advantages of this approach include its similarity to d-MRI in terms of averaging the effects of a large number of cellular structures, and its simplicity, which enables it to be implemented in a high-throughput manner. However, a drawback of previous implementations of this technique is their restriction to 2D. As a result, structure tensor analyses have been limited to tissue sectioned in a direction orthogonal to the direction of interest. Here we describe the analytical framework for extending structure tensor analysis to 3D, and utilize the results to analyze serial image "stacks" acquired with confocal microscopy of rhesus macaque hippocampal tissue. Implementation of 3D structure tensor procedures requires removal of sources of anisotropy introduced in tissue preparation and confocal imaging. This is accomplished with image processing steps that mitigate the effects of anisotropic tissue shrinkage and of anisotropy in the point-spread function (PSF). In order to address the latter confound, we describe procedures for measuring the dependence of PSF anisotropy on distance from the microscope objective within tissue. Prior to microscopy, ex vivo d-MRI measurements performed on the hippocampal tissue revealed three regions of tissue with mutually orthogonal directions of least restricted diffusion that correspond to CA1, alveus, and inferior longitudinal fasciculus. We demonstrate the ability of 3D structure tensor analysis to identify structure tensor orientations that
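The 3D structure tensor itself is simply the (smoothed) outer product of the intensity gradient with itself; its principal eigenvector gives the direction of strongest intensity variation. A minimal sketch (a global average stands in for the usual local Gaussian smoothing, and the volume is synthetic):

```python
import numpy as np

def structure_tensor_orientation(vol):
    """Dominant orientation of a 3D volume from the volume-averaged
    structure tensor (eigenvector of the largest eigenvalue)."""
    grads = np.gradient(vol.astype(float))      # [d/dz, d/dy, d/dx]
    J = np.empty((3, 3))
    for i in range(3):
        for j in range(3):
            # global average in place of local Gaussian smoothing
            J[i, j] = (grads[i] * grads[j]).mean()
    w, v = np.linalg.eigh(J)                    # eigenvalues in ascending order
    return v[:, -1]                             # largest-eigenvalue eigenvector

# synthetic volume varying only along z -> orientation (z, y, x) ~ (+/-1, 0, 0)
z = np.arange(32, dtype=float)[:, None, None]
vol = np.sin(z / 3.0) * np.ones((32, 32, 32))
direction = structure_tensor_orientation(vol)
```

In the full 3D analysis the same tensor would be computed per neighborhood, after the shrinkage and PSF-anisotropy corrections described above.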

Yttrium-90 is known to have a low positron emission decay branch of 32 ppm that may allow for personalized dosimetry of liver cancer therapy with 90Y-labeled microspheres. The aim of this work was to image and quantify 90Y so that accurate predictions of the absorbed dose can be made. The measurements were performed within the QUEST study (University of Sydney and Sirtex Medical, Australia). A NEMA IEC body phantom containing 6 fillable spheres (10-37 mm ∅) was used to measure the 90Y distribution with a Biograph mCT PET/CT (Siemens, Erlangen, Germany) with time-of-flight (TOF) acquisition. A sphere-to-background ratio of 8:1, with a total 90Y activity of 3 GBq, was used. Measurements were performed over one week (0, 3, 5, and 7 d). The acquisition protocol consisted of 30 min over 2 bed positions and 120 min over a single bed position. Images were reconstructed with 3D ordered subset expectation maximization (OSEM) and point-spread function (PSF) modeling for iteration numbers of 1-12 with 21 (TOF) and 24 (non-TOF) subsets, and CT-based attenuation and scatter correction. Convergence of the algorithms and activity recovery were assessed based on region-of-interest (ROI) analysis of the background (100 voxels), spheres (4 voxels), and the central low-density insert (25 voxels). For the largest sphere, the recovery coefficient (RC) values for the 30 min 2-bed-position, 30 min single-bed, and 120 min single-bed acquisitions were 1.12 ± 0.20, 1.14 ± 0.13, and 0.97 ± 0.07, respectively. For the smaller-diameter spheres, the PSF algorithm with TOF and single-bed acquisition provided comparatively better activity recovery. Quantification of 90Y using the Biograph mCT PET/CT is possible with reasonable accuracy, the limitations being the size of the lesion and the activity concentration present. At this stage, based on our study, it seems advantageous to use different protocols depending on the size of the lesion.
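The recovery-coefficient analysis can be sketched as follows: blur a digital phantom with a resolution kernel and compare ROI means to the true concentration; small spheres recover less of their true value than large ones. A 2D toy version (not the NEMA geometry, and the Gaussian blur is only a stand-in for the scanner resolution):

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Isotropic Gaussian blur via the Fourier domain."""
    ky = np.fft.fftfreq(img.shape[0])[:, None]
    kx = np.fft.fftfreq(img.shape[1])[None, :]
    kernel = np.exp(-2 * np.pi**2 * sigma**2 * (ky**2 + kx**2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * kernel))

def recovery_coefficient(image, roi, true_value):
    """RC = mean measured value inside the ROI / true value."""
    return float(image[roi].mean() / true_value)

# toy phantom: 8:1 sphere-to-background ratio, as in the measurement above
yy, xx = np.mgrid[:64, :64]
img = np.full((64, 64), 1.0)
small = (yy - 20)**2 + (xx - 20)**2 <= 2**2
large = (yy - 44)**2 + (xx - 44)**2 <= 10**2
img[small] = 8.0
img[large] = 8.0
blurred = gaussian_blur(img, sigma=2.0)
rc_small = recovery_coefficient(blurred, small, 8.0)   # partial-volume loss
rc_large = recovery_coefficient(blurred, large, 8.0)   # closer to 1
```

The small-sphere RC falls well below the large-sphere RC, illustrating the lesion-size limitation noted in the conclusions.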

To explore the feasibility of reducing administered tracer activities and to assess optimal activities for combined 18F-FDG PET/MRI in pediatric oncology, 30 18F-FDG PET/MRI examinations were performed on 24 patients with known or suspected solid tumors (10 girls, 14 boys, age 12 ± 5.6 [1-18] years; PET scan duration: 4 min per bed position). Low-activity PET images were retrospectively simulated from the originally acquired data sets using randomized undersampling of list-mode data. PET data for different simulated administered activities (0.25-2.5 MBq/kg body weight) were reconstructed with or without point-spread function (PSF) modeling. Mean and maximum standardized uptake values (SUVmean and SUVmax) as well as SUV variation (SUVvar) were measured in physiologic organs and focal FDG-avid lesions. Detectability of organ structures and of focal 18F-FDG-avid lesions, as well as the occurrence of false-positive PET lesions, was assessed at the different simulated tracer activities. Subjective image quality steadily declined with decreasing tracer activities. Compared to the originally acquired data sets, mean relative deviations of SUVmean and SUVmax were below 5% at 18F-FDG activities of 1.5 MBq/kg or higher. Over 95% of anatomic structures and all pathologic focal lesions were detectable at 1.5 MBq/kg 18F-FDG. Detectability of anatomic structures and focal lesions was significantly improved using PSF modeling. No false-positive focal lesions were observed at tracer activities of 1 MBq/kg 18F-FDG or higher. Administration of 18F-FDG activities of 1.5 MBq/kg is thus feasible without obvious diagnostic shortcomings, which is equivalent to a dose reduction of more than 50% compared to current recommendations. Significant reduction in administered 18F-FDG tracer activities is feasible in pediatric oncologic PET/MRI. Appropriate activities of 18F-FDG or other tracers for specific clinical questions have to be further established in selected patient
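The retrospective simulation of lower activities amounts to randomly thinning the list-mode event stream, and body-weight SUV is the tissue concentration normalized by injected activity per body weight. A sketch with toy numbers (the event list is synthetic):

```python
import numpy as np

def simulate_low_activity(events, fraction, rng):
    """Retrospectively simulate a lower administered activity by randomly
    keeping `fraction` of the list-mode coincidence events."""
    keep = rng.random(len(events)) < fraction
    return events[keep]

def suv(conc_bq_per_ml, injected_bq, weight_g):
    """Body-weight SUV: tissue concentration / (injected activity / weight)."""
    return conc_bq_per_ml / (injected_bq / weight_g)

rng = np.random.default_rng(3)
events = np.arange(1_000_000)                 # toy list-mode event identifiers
sub = simulate_low_activity(events, 0.25, rng)

# e.g. 5 kBq/ml uptake, 150 MBq injected, 30 kg child
s = suv(5_000.0, 150e6, 30_000.0)
```

Because SUV is a ratio of concentrations, its expectation is unchanged by the thinning; only its variance (image noise) grows as counts drop.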

Task-based analysis of medical imaging performance underlies many ongoing efforts in the development of new imaging systems. In statistical image reconstruction, regularization is often formulated in terms that encourage smoothness and/or sharpness (e.g., a linear, quadratic, or Huber penalty) but without explicit formulation of the task. We propose an alternative regularization approach in which a spatially varying penalty is determined that maximizes task-based imaging performance at every location in a 3D image. We apply the method to model-based image reconstruction (MBIR; viz., penalized weighted least-squares, PWLS) in cone-beam CT (CBCT) of the head, focusing on the task of detecting a small, low-contrast intracranial hemorrhage (ICH), and we test the performance of the algorithm in the context of a recently developed CBCT prototype for point-of-care imaging of brain injury. Theoretical predictions of local spatial resolution and noise are computed via an optimization in which the regularization (specifically, the quadratic penalty strength) is allowed to vary throughout the image to maximize the local task-based detectability index (d′). Simulation studies and test-bench experiments were performed using an anthropomorphic head phantom. Three PWLS implementations were tested: a conventional (constant) penalty; a certainty-based penalty derived to enforce a constant point-spread function (PSF); and the task-based penalty derived to maximize local detectability at each location. Conventional (constant) regularization exhibited a fairly strong degree of spatial variation in d′, and the certainty-based method achieved a uniform PSF, but each exhibited a reduction in detectability compared to the task-based method, which improved detectability by up to ~15%. The improvement was strongest in areas of high attenuation (skull base), where the conventional and certainty-based methods tended to over-smooth the data. The task-driven reconstruction method presents a
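The detectability index in its simplest form, a known signal in white Gaussian noise, reduces to d'^2 = sum(s^2)/sigma^2; the paper's local, task-based d' generalizes this with the reconstruction's local noise and resolution properties. A toy sketch of the simplified form only (the lesion model and noise level are arbitrary):

```python
import numpy as np

def detectability_index(signal, noise_sigma):
    """Ideal-observer d' for a known signal in white Gaussian noise:
    d'^2 = sum(signal^2) / sigma^2."""
    return float(np.sqrt(np.sum(np.asarray(signal, float)**2)) / noise_sigma)

# small low-contrast lesion model on a 17x17 grid
y, x = np.mgrid[-8:9, -8:9]
lesion = 0.05 * np.exp(-(x**2 + y**2) / (2 * 1.5**2))
d = detectability_index(lesion, noise_sigma=0.1)
```

The regularization trade-off follows directly: stronger smoothing lowers both the lesion amplitude (through blur) and the noise sigma, and d' tracks their ratio, which is what the spatially varying penalty optimizes per location.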

4D ultrafast ultrasound imaging was recently demonstrated using a 2D matrix array (i.e., fully populated) connected to a 1024-channel ultrafast ultrasound scanner. In this study, we investigate the row-column addressing (RCA) matrix approach, which allows a reduction of independent channels from N × N to N + N, with a dedicated beamforming strategy for ultrafast ultrasound imaging based on the coherent compounding of orthogonal plane waves (OPW). OPW is based on coherent compounding of plane wave transmissions in one direction with receive beamforming along the orthogonal direction, together with its orthogonal companion sequence. Such coherent recombination of complementary orthogonal sequences leads to virtual transmit focusing in both directions, which results in a final isotropic point-spread function (PSF). In this study, a 32 × 32 2D matrix array probe (1024 channels), centered at 5 MHz, was considered. An RCA array of the same footprint with 32 + 32 elements (64 channels) was emulated by summing the elements along either a line or a column in software prior to beamforming. This approach allowed direct comparison of the 32 + 32 RCA scheme to the optimal fully sampled 32 × 32 2D matrix configuration, which served as the gold standard. The approach was first studied through PSF simulations and then validated experimentally on a phantom consisting of anechoic cysts and echogenic wires. The contrast-to-noise ratio and the lateral resolution of the RCA approach were found to be approximately half (in decibels) and twice the values, respectively, obtained with the 2D matrix approach. Results in a Doppler phantom and the human humeral artery in vivo confirmed that ultrafast Doppler imaging can be achieved with reduced performance compared to the equivalent 2D matrix. Volumetric anatomic Doppler rendering and voxel-based pulsed Doppler quantification are presented as well. OPW compound imaging
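The RCA emulation described here, summing fully sampled element data along lines and columns, can be sketched directly (one time sample per element, for brevity; real data would carry a time axis per channel):

```python
import numpy as np

def emulate_rca(element_data):
    """Emulate a row-column-addressed array from fully sampled N x N
    element data by summing along rows and columns (N + N channels)."""
    rows = element_data.sum(axis=1)      # N row channels
    cols = element_data.sum(axis=0)      # N column channels
    return rows, cols

rng = np.random.default_rng(4)
full = rng.normal(size=(32, 32))         # toy 32 x 32 element snapshot
rows, cols = emulate_rca(full)           # 32 + 32 = 64 channels
```

This reduces 1024 independent channels to 64 while keeping the same footprint, which is exactly the trade explored against the fully sampled gold standard.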

We build on a long-standing tradition in astronomical adaptive optics (AO) of specifying performance metrics and error budgets using linear systems modeling in the spatial-frequency domain. Our goal is to provide a comprehensive tool for the calculation of error budgets in terms of residual temporally filtered phase power spectral densities and variances. In addition, the fast simulation of AO-corrected point-spread functions (PSFs) provided by this method can be used as input for simulations of science observations with next-generation instruments and telescopes, in particular to predict post-coronagraphic contrast improvements for planet finder systems. We extend the previous results presented in Correia and Teixeira [J. Opt. Soc. Am. A 31, 2763 (2014)] to the closed-loop case with predictive controllers and generalize the analytical modeling of Rigaut et al. [Proc. SPIE 3353, 1038 (1998)], Flicker [Technical Report (W. M. Keck Observatory, 2007)], and Jolissaint [J. Eur. Opt. Soc. 5, 10055 (2010)]. We follow closely the developments of Ellerbroek [J. Opt. Soc. Am. A 22, 310 (2005)] and propose the synthesis of a distributed Kalman filter to mitigate both aniso-servo-lag and aliasing errors while minimizing the overall residual variance. We discuss applications to (i) analytic AO-corrected PSF modeling in the spatial-frequency domain, (ii) post-coronagraphic contrast enhancement, (iii) filter optimization for real-time wavefront reconstruction, and (iv) PSF reconstruction from system telemetry. Under perfect knowledge of wind velocities, we show that ∼60 nm rms error reduction can be achieved with the distributed Kalman filter embodying antialiasing reconstructors on 10 m class high-order AO systems, leading to contrast improvement factors of up to three orders of magnitude at few λ/D separations (∼1-5 λ/D) for a

Stellar binaries are a common byproduct of star formation and therefore inform us about the processes of collapse and fragmentation of prestellar cores. While multiplicity surveys generally reveal an extensive diversity of multiple systems, with broad ranges of semi-major axes, mass ratios, and eccentricities, one remarkable feature identified in the last two decades is the so-called brown dwarf desert, i.e., the apparent paucity of (non-planetary) substellar companions to solar-type stars. This "desert" was primarily identified among spectroscopic binaries but also appears to be a significant feature of wider, visual binaries. The physical origin of this feature has not been fully accounted for but is likely established during the formation of the systems. One way to shed new light on this question is to study the frequency of low-mass stellar companions to intermediate-mass stars (late-B type, or 3-5 Msun), as these form through a similar, albeit scaled-up, mechanism as solar-type stars. Here we present preliminary results from two adaptive-optics-based surveys to search for such multiple systems. Specifically, we are using the new ShaneAO system on the Lick 3 m telescope (~100 stars observed to date) and the Gemini Planet Imager (45 stars observed). We are targeting stars located both in open clusters and scattered in the Galactic field to search for potential evidence of dynamical evolution. To identify candidate low-mass companions close in to the target stars, we use advanced point-spread function (PSF) subtraction algorithms, specifically implementations of the LOCI and KLIP algorithms. In the case of the ShaneAO observations, which do not allow for field rotation, we use LOCI in combination with Reference Differential Imaging (RDI), using our library of science images as input for PSF subtraction. In this contribution, we will discuss the potential of ShaneAO to reveal faint, subarcsecond companions in this context and present candidate companions from both surveys.
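A minimal sketch of the KLIP-style PSF subtraction mentioned above, assuming flattened frames and a simple SVD-based Karhunen-Loève basis. Real pipelines add frame registration, masking, and zone-wise processing; the shapes, `k`, and the synthetic library here are illustrative:

```python
import numpy as np

def klip_subtract(science, references, k=5):
    """Project a science frame onto the first k principal components
    of a reference PSF library and subtract the projection.

    science:    (npix,) flattened science frame
    references: (nref, npix) flattened reference library
    """
    ref = references - references.mean(axis=1, keepdims=True)
    sci = science - science.mean()
    # Karhunen-Loeve basis of the reference library via SVD
    _, _, vt = np.linalg.svd(ref, full_matrices=False)
    basis = vt[:k]                      # (k, npix) orthonormal modes
    model = basis.T @ (basis @ sci)     # PSF model: projection onto basis
    return sci - model                  # residual after PSF subtraction

rng = np.random.default_rng(2)
refs = rng.standard_normal((20, 100))          # toy reference library
sci = refs[0] + 0.01 * rng.standard_normal(100)  # frame resembling a reference
resid = klip_subtract(sci, refs, k=10)
```

The residual retains structure (e.g., a faint companion) that is not well represented by the reference library, which is why the method suppresses the stellar PSF while preserving point sources.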

configurations (achievable parameters). A constrained optimization technique is used to generate a set of ultrasound system parameters as a function of operating frequency. A sample set of transducer and transducer-array configurations is used to generate the associated point-spread functions. The effects of the acoustic features of tissue on backscattered ultrasound data are examined through the interaction of the system point-spread functions and a tissue phantom model. Tissue characterization experiments are performed on material tissue phantoms in order to obtain information about their acoustic features (sound speed, density, and scatterer shape, size, and density). From the experimental data, finite element models (FEM) of the tissue phantom are created. A simulation of the interaction of the point-spread functions (PSFs) of the cMUT arrays and the computational tissue phantom (FEM) is then performed, and estimates of the statistics of the backscatter are used to estimate scatterer density. In addition, the frequency dependence of the backscatter is used to estimate scatterer diameter and shape. The ultrasound array that provides the most accurate estimates of scatterer diameter and density is considered to be the array best suited to obtain quantitative information about the tissue under examination.
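The PSF-phantom interaction step can be illustrated by convolving a system PSF with a random scatterer field and examining the envelope statistics from which scatterer density is estimated. The Gaussian kernel and the scatterer density below are placeholder assumptions, not the cMUT PSFs or FEM phantom of the study:

```python
import numpy as np

# Backscatter simulation: convolve a system PSF with a sparse random
# scatterer field, then take the envelope for speckle statistics.
rng = np.random.default_rng(3)
n = 128
density = 0.05  # fraction of pixels holding a scatterer (assumed)
scatterers = (rng.random((n, n)) < density) * rng.standard_normal((n, n))

t = np.arange(-8, 9)
xx, yy = np.meshgrid(t, t)
psf = np.exp(-(xx**2 + yy**2) / (2 * 3.0**2))  # toy Gaussian system PSF

# 2D convolution via FFT (circular boundaries are fine for statistics)
rf = np.real(np.fft.ifft2(np.fft.fft2(scatterers) *
                          np.fft.fft2(psf, s=(n, n))))
envelope = np.abs(rf)
# Envelope moments (mean/std ratio, higher moments) are the kind of
# backscatter statistics used to infer scatterer density.
env_ratio = envelope.mean() / envelope.std()
```

Repeating this with different assumed densities and PSFs shows how the envelope statistics shift, which is the basis for selecting the array that estimates scatterer properties most accurately.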

It is shown that for linear dynamical systems with quadratic supply rates, a storage function can always be written as a quadratic function of the state of an associated linear dynamical system. This dynamical system is obtained by combining the dynamics of the original system with the dynamics of
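The abstract's claim can be stated concretely. For an LTI system with a quadratic supply rate, a sketch of the standard dissipativity setup with quadratic storage follows; the symbols are generic textbook notation, not necessarily the paper's:

```latex
% LTI system and quadratic supply rate in (u, y):
\dot{x} = Ax + Bu, \qquad y = Cx + Du,
\qquad
s(u, y) = \begin{pmatrix} u \\ y \end{pmatrix}^{\!\top}
          \Sigma
          \begin{pmatrix} u \\ y \end{pmatrix},
\quad \Sigma = \Sigma^{\top}.
% The claim: a storage function can be taken quadratic in the state,
V(x) = x^{\top} K x, \qquad K = K^{\top},
% satisfying the dissipation inequality along all trajectories:
\frac{\mathrm{d}}{\mathrm{d}t}\, V\bigl(x(t)\bigr) \le s\bigl(u(t), y(t)\bigr).
```

In this formulation, finding such a K reduces to a linear matrix inequality, which is why the quadratic-storage result is computationally useful.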

Approximately a decade ago, it was suggested that a new function should be added to the lexicographical function theory: the interpretive function(1). However, hardly any research has been conducted into this function, and though it was only suggested that this new function was relevant to incorporate into lexicographical theory, some scholars have since then assumed that this function exists(2), including the author of this contribution. In Agerbo (2016), I present arguments supporting the incorporation of the interpretive function into the function theory and suggest how non-linguistic signs can be treated in specific dictionary articles. However, in the current article, due to the results of recent research, I argue that the interpretive function should not be considered an individual main function. The interpretive function, contrary to some of its definitions, is not connected...

-effect formulations, where the observed functional signal is assumed to consist of both fixed and random functional effects. This thesis takes the initial steps toward the development of likelihood-based methodology for functional objects. We first consider analysis of functional data defined on high...

functions have received a huge amount of attention due to new attacks on widely used hash functions. This PhD thesis, having the title "Cryptographic Hash Functions", contains both a general description of cryptographic hash functions, including their applications and expected properties as well as some...

Master functions and discover how to write functional programs in R. In this book, you'll make your functions pure by avoiding side-effects; you'll write functions that manipulate other functions, and you'll construct complex functions using simpler functions as building blocks. In Functional Programming in R, you'll see how we can replace loops, which can have side-effects, with recursive functions that can more easily avoid them. In addition, the book covers why you shouldn't use recursion when loops are more efficient and how you can get the best of both worlds. Functional programming is a style of programming, like object-oriented programming, but one that focuses on data transformations and calculations rather than objects and state. Where in object-oriented programming you model your programs by describing which states an object can be in and how methods will reveal or modify...

to acting and therefore the only difference between reception and interpretation is that they work with different types of sign. However, the type of sign is not relevant for a function, or rather, it should not be a criterion for distinguishing between functions. The lemma selection for the communicative...

Ocular straylight is the combined effect of light scattering in the optical media and the diffuse reflectance from the various fundus layers. The aim of this work was to employ an optical technique to measure straylight at different wavelengths and to identify the optimal conditions for visually relevant optical measurements of straylight. The instrument, based on the double-pass (DP) principle, used a series of uniform disks that were projected onto the retina, allowing the recording of the wide-angle point-spread function (PSF) from its peak out to 7.3° of visual angle. A liquid crystal wavelength-tunable filter was used to select six different wavelengths ranging from 500 to 650 nm. The measurements were performed in nine healthy Caucasian subjects. The straylight parameter was analyzed for small (0.5°) and large (6°) angles. For small angles, the wavelength dependence of straylight matches the transmittance spectrum of hemoglobin, which suggests that diffuse light from the fundus contributes significantly to the total straylight for wavelengths longer than 600 nm. Eyes with lighter pigmentation exhibited higher straylight at all wavelengths. For larger angles, straylight was less dependent on wavelength and eye pigmentation. Small-angle straylight in the eye is affected by the wavelength-dependent properties of the fundus. At those small angles, measurements using wavelengths near the peak of the spectral sensitivity of the eye might be better correlated with the visual aspects of straylight. However, the impact of fundus reflectance on the values of the straylight parameter at larger angles did not depend on the measuring wavelength.
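For reference, the straylight parameter reported in such studies is conventionally defined as s(θ) = θ² · PSF(θ). A small sketch with an assumed Stiles-Holladay-like power-law PSF (an illustrative model, not the measured double-pass data):

```python
import numpy as np

# Straylight parameter s(theta) = theta^2 * PSF(theta), evaluated on a
# sampled wide-angle PSF over the angular range used in the study.
theta = np.linspace(0.5, 7.3, 200)   # visual angle [deg]
psf = 10.0 / theta**2                # assumed power-law wide-angle PSF
s = theta**2 * psf                   # straylight parameter
log_s = np.log10(s)                  # conventionally reported as log10(s)
```

For a pure 1/θ² PSF the parameter s is constant with angle; wavelength- or pigmentation-dependent deviations from constancy are exactly what the measurements above quantify.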

Introduction: In computed tomography (CT) technology, an optimal radiation dose can be achieved by changing radiation parameters such as mA, pitch factor, rotation time, and tube voltage (kVp) for diagnostic images. Materials and Methods: In this study, brain, abdomen, and thorax scanning was performed using a Toshiba 16-slice scanner and standard AAPM and CTDI phantoms. The AAPM phantom was used for the measurement of image-related parameters, and the CTDI phantom was utilized for the calculation of the absorbed dose to patients. Imaging parameters including mA (50-400 mA), pitch factor (1 and 1.5), and rotation time (0.5, 0.75, 1, 1.5, and 2 seconds) were considered as independent variables. Brain, abdomen, and chest imaging was performed in multi-slice and spiral modes. Changes in image quality parameters, including contrast resolution (CR) and spatial resolution (SR), in each condition were measured and determined with MATLAB software. Results: After normalizing the data by plotting the full width at half maximum (FWHM) of the point-spread function (PSF) in each condition, it was observed that image quality was not noticeably affected in any case. Therefore, in the brain scan, the lowest patient dose was at 150 mA and a rotation time of 1.5 seconds. Based on the results of scanning the abdomen and chest, the lowest patient dose was obtained with 100 mA and pitch factors of 1 and 1.5. Conclusion: It was found that images with acceptable quality and reliable detection ability could be obtained using smaller radiation doses than the protocols commonly used by operators.
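The FWHM-of-PSF metric used for normalization above can be estimated from a sampled profile by interpolating the half-maximum crossings. The study used MATLAB; this NumPy equivalent is an illustrative sketch with a synthetic Gaussian profile:

```python
import numpy as np

def fwhm(x, y):
    """Full width at half maximum of a sampled single-peaked profile,
    with linear interpolation at the two half-maximum crossings."""
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i0, i1 = above[0], above[-1]
    # interpolate the left and right crossings (xp must be increasing)
    xl = np.interp(half, [y[i0 - 1], y[i0]], [x[i0 - 1], x[i0]])
    xr = np.interp(half, [y[i1 + 1], y[i1]], [x[i1 + 1], x[i1]])
    return xr - xl

x = np.linspace(-5, 5, 1001)
sigma = 1.0
y = np.exp(-x**2 / (2 * sigma**2))   # synthetic Gaussian PSF profile
# Analytic FWHM of a Gaussian: 2*sqrt(2*ln 2)*sigma
```

Tracking this width across mA, pitch, and rotation-time settings is how one verifies that spatial resolution is unaffected while dose is reduced.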

A new portal imager consisting of four vertically stacked conventional electronic portal imaging device (EPID) layers has been constructed in pursuit of improved detective quantum efficiency (DQE). We hypothesize that super-resolution (SR) imaging can also be achieved in such a system by shifting each layer laterally by half a pixel relative to the layer above. Super-resolution imaging will improve resolution and contrast-to-noise ratio (CNR) in megavoltage (MV) planar and cone-beam computed tomography (MV-CBCT) applications. Simulations are carried out to test this hypothesis with digital phantoms. To assess planar resolution, 2 mm long iron rods with a 0.3 × 0.3 mm² square cross-section are arranged in a grid pattern at the center of 1 cm thick solid water. For measuring CNR in MV-CBCT, a 20 cm diameter digital phantom with 8 inserts of different electron densities is used. For measuring resolution in MV-CBCT, a digital phantom featuring a bar pattern similar to the Gammex™ phantom is used. A 6 MV beam is attenuated through each phantom and detected by each of the four detector layers. The fill factor of the detector is explicitly considered. Projections are blurred with an estimated point-spread function (PSF) before super-resolution reconstruction. When projections from multiple shifted layers are used in SR reconstruction, even a simple shift-add fusion can significantly improve the resolution in reconstructed images. In the reconstructed planar image, the grid pattern becomes visually clearer. In MV-CBCT, combining projections from multiple layers results in increased CNR and resolution. The inclusion of two, three, and four layers increases CNR by 40%, 70%, and 99%, respectively. Shifting adjacent layers by half a pixel almost doubles resolution. In comparison, using four perfectly aligned layers does not improve resolution relative to a single layer.
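The "simple shift-add fusion" of four half-pixel-shifted layers amounts to interleaving the layers onto a twice-finer grid. A sketch under assumed layer offsets (the ordering and synthetic data are illustrative, not the constructed imager's geometry):

```python
import numpy as np

def shift_add(l00, l01, l10, l11):
    """Fuse four (n, n) detector layers with sub-pixel offsets
    (0,0), (0,0.5), (0.5,0), (0.5,0.5) pixels into a (2n, 2n) image
    by interleaving them onto a twice-finer grid."""
    n = l00.shape[0]
    hi = np.zeros((2 * n, 2 * n))
    hi[0::2, 0::2] = l00
    hi[0::2, 1::2] = l01
    hi[1::2, 0::2] = l10
    hi[1::2, 1::2] = l11
    return hi

n = 4
layers = [np.full((n, n), float(k)) for k in range(4)]  # toy layer data
sr = shift_add(*layers)
```

Because each fine-grid pixel comes from a physically distinct sampling position, the fused image carries genuinely new spatial frequencies; four perfectly aligned layers, by contrast, only average the same samples.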

The oxygen absorption line imprinted in the scattered light from Earth-like planets has been considered the most promising metabolic biomarker for exolife. We examine the feasibility of detecting the 1.27 μm oxygen band from habitable exoplanets, in particular around late-type stars, observed with a future instrument on a 30 m class ground-based telescope. We analyzed the night airglow around 1.27 μm with the IRCS/echelle spectrometer on Subaru and found that the strong telluric emission from atmospheric oxygen molecules declines by an order of magnitude by midnight. By compiling nearby-star catalogs combined with the sky background model, we estimate the detectability of the oxygen absorption band from an Earth twin, if it exists, around nearby stars. We find that the most dominant source of photon noise for the oxygen 1.27 μm band detection comes from the night airglow if the contribution of the stellar point-spread function (PSF) halo is suppressed enough to detect the planet. We conclude that future detectors, for which the detection contrast is limited by photon noise, can detect the oxygen 1.27 μm absorption band of Earth twins for ~50 late-type star candidates. This paper demonstrates the importance of deploying an efficient coronagraph with a small inner working angle and extreme adaptive optics on extremely large telescopes, and clearly shows that doing so will enable the study of potentially habitable planets.

False alarm rate and detection rate remain two contradictory metrics for infrared small target detection in an infrared search and track (IRST) system, despite the development of new detection algorithms. In certain circumstances, not detecting true targets is more tolerable than detecting false items as true targets. Hence, considering background clutter and detector noise as the sources of false alarms in an IRST system, in this paper a false-alarm-aware methodology is presented to reduce the false alarm rate while keeping the detection rate undegraded. To this end, the advantages and disadvantages of each detection algorithm are investigated and the sources of false alarms are determined. Two target detection algorithms having independent false alarm sources are chosen in such a way that the disadvantages of one algorithm can be compensated by the advantages of the other. In this work, multi-scale average absolute gray difference (AAGD) and Laplacian of point-spread function (LoPSF) are utilized as the cornerstones of the desired algorithm of the proposed methodology. After presenting a conceptual model for the desired algorithm, it is implemented through the most straightforward mechanism. The desired algorithm effectively suppresses background clutter and eliminates detector noise. Also, since the input images are processed through just four different scales, the desired algorithm has good capability for real-time implementation. Simulation results in terms of signal-to-clutter ratio and background suppression factor on real and simulated images prove the effectiveness and performance of the proposed methodology. Since the desired algorithm was developed based on independent false alarm sources, the proposed methodology is expandable to any pair of detection algorithms that have different false alarm sources.
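A minimal sketch of a multi-scale average absolute gray difference (AAGD) response map: at each scale, the mean gray level of an inner cell is compared with that of its surrounding ring, and scales are fused by a pointwise maximum. The window sizes and fusion rule are illustrative choices, not necessarily the paper's exact formulation:

```python
import numpy as np

def aagd_map(img, scales=(1, 2, 3, 4)):
    """Multi-scale AAGD response: bright small targets produce large
    inner-minus-surround differences at some scale."""
    h, w = img.shape
    out = np.zeros((h, w))
    for r in scales:
        R = 2 * r                      # outer window half-width
        resp = np.zeros((h, w))
        for i in range(R, h - R):
            for j in range(R, w - R):
                inner = img[i - r:i + r + 1, j - r:j + r + 1].mean()
                patch = img[i - R:i + R + 1, j - R:j + R + 1]
                # mean of the surrounding ring (outer minus inner cell)
                outer = (patch.sum() - inner * (2 * r + 1)**2) / \
                        ((2 * R + 1)**2 - (2 * r + 1)**2)
                resp[i, j] = max(inner - outer, 0.0)  # bright targets only
        out = np.maximum(out, resp)    # fuse scales by pointwise max
    return out

img = np.zeros((21, 21))
img[10, 10] = 5.0                      # a one-pixel synthetic "target"
resp = aagd_map(img)
```

With only four scales, the map costs a small constant number of passes over the image, which is consistent with the real-time claim above.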

The Hyper Suprime-Cam Subaru Strategic Program (HSC-SSP) is a three-layered imaging survey aimed at addressing some of the most important outstanding questions in astronomy today, including the nature of dark matter and dark energy. The survey has been awarded 300 nights of observing time at the Subaru Telescope, and it started in 2014 March. This paper presents the first public data release of HSC-SSP. This release includes data taken in the first 1.7 yr of observations (61.5 nights), and each of the Wide, Deep, and UltraDeep layers covers about 108, 26, and 4 square degrees down to depths of i ˜ 26.4, ˜26.5, and ˜27.0 mag, respectively (5σ for point sources). All the layers are observed in five broad bands (grizy), and the Deep and UltraDeep layers are observed in narrow bands as well. We achieve an impressive image quality of 0.6″ in the i band in the Wide layer. We show that we achieve 1%-2% point-spread function (PSF) photometry (root mean square) both internally and externally (against Pan-STARRS1), and ˜10 mas and 40 mas internal and external astrometric accuracy, respectively. Both the calibrated images and catalogs are made available to the community through dedicated user interfaces and database servers. In add