Magnetic resonance imaging (MRI) utilizes the magnetic properties of tissues to generate image-forming signals. MRI has exquisite soft-tissue contrast and, since tumors are mainly soft tissue, it offers improved delineation of the target volume and nearby organs at risk. The proposed Magnetic Resonance-only Radiotherapy (MR-only RT) workflow allows MRI to be used as the sole imaging modality in the radiotherapy (RT) treatment planning of cancer. There are, however, issues with geometric distortions inherent to the MR image acquisition process. These distortions result from imperfections in the main magnetic field, nonlinear gradients, and field disturbances introduced by the imaged object. In this thesis, we quantified the effect of system-related and patient-induced susceptibility geometric distortions on dose distributions for prostate as well as head and neck cancers. Methods to mitigate these distortions were also studied.

In Study I, mean worst system-related residual distortions of 3.19, 2.52 and 2.08 mm at bandwidths (BW) of 122, 244 and 488 Hz/pixel, up to a radial distance of 25 cm in a 3T PET/MR scanner, were measured with a large field of view (FoV) phantom. Subsequently, we estimated maximum shifts of 5.8, 2.9 and 1.5 mm due to patient-induced susceptibility distortions. VMAT-optimized treatment plans initially performed on distorted CT (dCT) images and recalculated on real CT datasets resulted in a dose difference of less than 0.5%.
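
The inverse scaling of susceptibility shifts with bandwidth follows directly from the readout: the displacement along the frequency-encoding direction is the off-resonance divided by the bandwidth per pixel. A minimal sketch (the 700 Hz off-resonance value is a hypothetical illustration, not a figure from the study):

```python
# Displacement along the frequency-encoding direction:
# shift (pixels) = off-resonance / bandwidth, so doubling the
# bandwidth halves the geometric distortion.
def susceptibility_shift_mm(delta_f_hz, bw_hz_per_pixel, pixel_size_mm):
    return delta_f_hz / bw_hz_per_pixel * pixel_size_mm

# Hypothetical 700 Hz off-resonance, 1 mm pixels, the three bandwidths above:
for bw in (122, 244, 488):
    print(bw, round(susceptibility_shift_mm(700.0, bw, 1.0), 2))
```

Note the halving with each doubling of bandwidth, the same pattern as the reported 5.8, 2.9 and 1.5 mm shifts.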

The magnetic susceptibility differences at tissue-metal, tissue-air and tissue-bone interfaces result in local B0 magnetic field inhomogeneities. The distortion shifts caused by these field inhomogeneities can be reduced by shimming. Study II aimed to investigate the use of shimming to improve the homogeneity of the local B0 magnetic field, which will be beneficial for radiotherapy applications. A shimming simulation based on spherical harmonics modeling was developed. The spinal cord, an organ at risk, is surrounded by bone and in close proximity to the lungs, and may therefore have high susceptibility differences. In this region, mean pixel shifts caused by local B0 field inhomogeneities were reduced from 3.47±1.22 mm to 1.35±0.44 mm and 0.99±0.30 mm using first- and second-order shimming, respectively. This was for a bandwidth of 122 Hz/pixel and an in-plane voxel size of 1×1 mm². Study II, like Study I, also examined the dosimetric effect of geometric distortions, here on 21 head and neck cancer treatment plans. The dose difference in D50 at the PTV between distorted-CT and real-CT plans was less than 1.0%.
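
First-order shimming can be thought of as subtracting the best-fit linear field from a B0 off-resonance map; the residual divided by the acquisition bandwidth gives the remaining pixel shift. A minimal NumPy sketch on a synthetic field map (the field coefficients and noise level are illustrative, not taken from the study):

```python
import numpy as np

# Synthetic B0 off-resonance map (Hz): a linear background field plus noise.
rng = np.random.default_rng(0)
ny, nx = 64, 64
y, x = np.mgrid[0:ny, 0:nx]
b0 = 2.0 * x + 1.5 * y + 30.0 + rng.normal(0, 5, (ny, nx))

# First-order shimming modeled as least-squares removal of the
# zeroth- and first-order (constant, x, y) terms.
A = np.column_stack([np.ones(nx * ny), x.ravel(), y.ravel()])
coef, *_ = np.linalg.lstsq(A, b0.ravel(), rcond=None)
residual = b0.ravel() - A @ coef

bw = 122.0  # Hz/pixel, as in the study
shift_before = np.abs(b0).mean() / bw       # mean pixel shift, unshimmed
shift_after = np.abs(residual).mean() / bw  # after first-order shimming
print(round(shift_before, 2), round(shift_after, 3))
```

Higher-order shimming would extend the design matrix with higher spherical-harmonic terms in the same way.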

In conclusion, the effect of MR geometric distortions on dose plans was small. Generally, patient-induced susceptibility distortions were larger than residual system distortions for all delineated structures except the external contour. This information will be relevant when setting margins for treatment volumes and organs at risk.

The current practice of characterizing MR geometric distortions utilizing spatial accuracy phantoms alone may not be enough for an MR-only radiotherapy workflow. Therefore, measures to mitigate patient-induced susceptibility effects in clinical practice such as patient-specific correction algorithms are needed to complement existing distortion reduction methods such as high acquisition bandwidth and shimming.

In this paper, we propose a framework for gradually improving the quality of an existing image descriptor. The descriptor used in this paper (Afkham et al., 2013) uses the responses of a series of discriminative components to summarize each image. As we will show, this descriptor has an ideal form in which all categories become linearly separable. While reaching this form is not feasible, we argue that by replacing a small fraction of these components it is possible to obtain a descriptor that is, on average, closer to this ideal form. To do so, we first identify which components do not contribute to the quality of the descriptor and replace them with more robust ones. Here, a joint feature selection method is used to find improved components. As our experiments show, this change is directly reflected in the capability of the resulting descriptor to discriminate between different categories.

Karolinska University Hospital, Huddinge, Sweden, has long desired to plan hip prostheses with Computed Tomography (CT) scans instead of plain radiographs, to save time and reduce patient discomfort. This has not been possible previously, as their current software is limited to prosthesis planning on traditional 2D X-ray images. The purpose of this project was therefore to create an application (software) that allows medical professionals to derive a 2D image from CT images that can be used for prosthesis planning.

To create the application, the NumPy and Visualization Toolkit (VTK) Python libraries were utilised and tied together with the graphical user interface library PyQt4. The application includes a graphical interface and methods for optimizing the images for prosthesis planning.
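
One simple way to derive a radiograph-like 2D image from a CT volume is a ray-sum projection along one axis. The sketch below, using NumPy only, illustrates the idea under assumed conventions; the actual application is built on VTK and PyQt4, and the function name and windowing here are hypothetical:

```python
import numpy as np

# A minimal sketch: average attenuation along one axis of a CT volume
# (a simple ray-sum projection), then rescale to 8-bit for display.
def project_ct(volume_hu, axis=1):
    """Mean-intensity projection of a CT volume in Hounsfield units."""
    proj = volume_hu.mean(axis=axis)
    lo, hi = proj.min(), proj.max()
    return np.round((proj - lo) / (hi - lo + 1e-9) * 255).astype(np.uint8)

# Synthetic volume: air background (-1000 HU) with a bone-like block.
vol = np.full((32, 32, 32), -1000.0)
vol[10:20, 10:20, 10:20] = 700.0
image = project_ct(vol)
print(image.shape)  # (32, 32)
```

A real implementation would additionally resample to isotropic voxels and apply a clinically meaningful intensity window before planning.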

The application was finished and serves its purpose but the quality of the images needs to be evaluated with a larger sample group.

Background: Non-invasive diagnostic imaging of atherosclerotic coronary artery disease (CAD) is frequently carried out with cardiovascular magnetic resonance imaging (CMR) or myocardial perfusion single photon emission computed tomography (MPS). CMR is the gold standard for the evaluation of scar after myocardial infarction, and MPS is the clinical gold standard for ischemia. Magnetic resonance imaging (MRI) is at times difficult for patients and may induce anxiety, while patient experience of MPS is largely unknown.

Aims: To evaluate image quality in CMR with respect to the sequences employed, the influence of atrial fibrillation, myocardial perfusion and the impact of patient information. Further, to study patient experience in relation to MRI with the goal of improving the care of these patients.

Method: Four study designs were used: paper I, an experimental cross-over study; paper II, an experimental controlled clinical trial; paper III, a psychometric cross-sectional study; and paper IV, a prospective intervention study. A total of 475 patients ≥ 18 years, primarily with cardiac problems (I-IV) except for those referred for MRI of the spine (III), were included in the four studies.

Result: In patients (n=20) with atrial fibrillation, a single shot steady state free precession (SS-SSFP) sequence showed significantly better image quality than the standard segmented inversion recovery fast gradient echo (IR-FGRE) sequence (I). In first-pass perfusion imaging, the gradient echo-echo planar imaging sequence (GRE-EPI) (n=30) had lower signal-to-noise and contrast-to-noise ratios than the steady state free precession sequence (SSFP) (n=30) but displayed a higher correlation with the MPS results, evaluated both qualitatively and quantitatively (II). The MRI-Anxiety Questionnaire (MRI-AQ) was validated on patients referred for MRI of either the spine (n=193) or the heart (n=54). The final instrument had 15 items divided into two factors, Anxiety and Relaxation. The instrument was found to have satisfactory psychometric properties (III). Patients who viewed an information video prior to CMR scored significantly lower (better) on the factor Relaxation than those who received standard information. Patients who underwent MPS scored lower on both factors, Anxiety and Relaxation. The extra video information had no effect on CMR image quality (IV).

Conclusion: Single shot imaging in atrial fibrillation produced images with fewer artefacts than a segmented sequence. In first-pass perfusion imaging, the GRE-EPI sequence was superior to SSFP. A questionnaire assessing anxiety during MRI showed that video information prior to imaging helped patients relax but did not result in an improvement in image quality.

Background To assess myocardial perfusion, steady-state free precession cardiac magnetic resonance (SSFP CMR) was compared with gradient-echo echo-planar imaging (GRE-EPI), using myocardial perfusion scintigraphy (MPS) as reference. Methods Cardiac magnetic resonance perfusion was recorded in 30 patients with SSFP and in another 30 patients with GRE-EPI. Timing and extent of inflow delay to the myocardium were visually assessed. Signal-to-noise (SNR) and contrast-to-noise (CNR) ratios were calculated. Myocardial scar was visualized with a phase-sensitive inversion recovery (PSIR) sequence. All scar-positive segments were considered pathologic. In MPS, stress and rest images were used as in clinical reporting. The CMR contrast wash-in slope was calculated and compared with the stress score from the MPS examination. CMR scar, CMR perfusion and MPS were assessed separately by one expert for each method, blinded to the other aspects of the study. Results Visual assessment of CMR had a sensitivity for the detection of an abnormal MPS of 78% (SSFP) versus 91% (GRE-EPI) and a specificity of 58% (SSFP) versus 84% (GRE-EPI). The kappa statistic for SSFP versus MPS was 0.29, and for GRE-EPI versus MPS 0.72. The ANOVA of CMR perfusion slopes for all segments versus MPS score (four levels based on MPS) gave correlations r = 0.64 (SSFP) and r = 0.96 (GRE-EPI). For normal segments, SNR was 35.63 ± 11.80 (SSFP) and 17.98 ± 8.31 (GRE-EPI), while CNR was 28.79 ± 10.43 (SSFP) and 13.06 ± 7.61 (GRE-EPI). Conclusion GRE-EPI displayed higher agreement with the MPS results than SSFP despite significantly lower signal intensity, SNR and CNR.

This book is a compilation of reviews about the pathogenesis of Type 1 Diabetes (T1D), a classic autoimmune disease. Genetic factors are clearly determinant but cannot explain the rapid, even overwhelming, expansion of this disease. Understanding the etiology and pathogenesis of this disease is essential. A number of experts in the field have covered a range of topics for consideration that are applicable to researchers and clinicians alike. This book provides apt descriptions of cutting-edge technologies and applications in the ongoing search for treatments and a cure for diabetes. Areas including T cell development, innate immune responses, imaging of pancreata, and potential viral initiators are considered.

The aim of this study was to evaluate tracking performance when an extra reference block is added to a basic block-matching method, where the two reference blocks originate from two consecutive ultrasound frames. The use of an extra reference block was evaluated for two putative benefits: (i) an increase in tracking performance while maintaining the size of the reference blocks, evaluated using in silico and phantom cine loops; (ii) a reduction in the size of the reference blocks while maintaining the tracking performance, evaluated using in vivo cine loops of the common carotid artery where the longitudinal movement of the wall was estimated. The results indicated that tracking accuracy improved (mean - 48%, p<0.005 [in silico]; mean - 43%, p<0.01 [phantom]), and there was a reduction in size of the reference blocks while maintaining tracking performance (mean - 19%, p<0.01 [in vivo]). This novel method will facilitate further exploration of the longitudinal movement of the arterial wall. (C) 2014 World Federation for Ultrasound in Medicine & Biology.
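
The basic idea, a second reference block contributing an additional displacement estimate, can be sketched with sum-of-squared-differences matching. Block size, search range and the simple averaging of the two estimates are illustrative choices, not the paper's exact algorithm:

```python
import numpy as np

# Exhaustive SSD block matching: find the displacement of a reference
# block within a search window of the new frame.
def match_block(frame, ref, top, left, search=3):
    h, w = ref.shape
    best, best_d = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > frame.shape[0] or x + w > frame.shape[1]:
                continue
            ssd = np.sum((frame[y:y + h, x:x + w] - ref) ** 2)
            if ssd < best:
                best, best_d = ssd, (dy, dx)
    return best_d

def track(frame, refs, top, left):
    """Average the displacements estimated from each reference block."""
    return tuple(np.mean([match_block(frame, r, top, left) for r in refs], axis=0))

# Synthetic speckle pattern shifted by (0, 2) pixels between frames.
rng = np.random.default_rng(1)
f0 = rng.random((40, 40))
f1 = np.roll(f0, 2, axis=1)
ref_a = f0[10:18, 10:18]   # reference block from frame 0
ref_b = f1[10:18, 12:20]   # the same tissue patch, taken from frame 1
print(track(f1, [ref_a, ref_b], 10, 10))
```

In practice the two estimates would be fused with weights reflecting match quality rather than a plain mean.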

New microscopy techniques are continuously developed, resulting in ever more rapid acquisition of large amounts of data. Manual analysis of such data is extremely time-consuming, and many features are difficult to quantify without the aid of a computer. However, with automated image analysis, biologists can extract quantitative measurements and increase throughput significantly, which becomes particularly important in high-throughput screening (HTS). This thesis addresses automation of traditional analysis of cell data as well as automation of both image capture and analysis in zebrafish high-throughput screening.

It is common in microscopy images to stain the nuclei in the cells, and to label the DNA and proteins in different ways. Padlock-probing and proximity ligation are highly specific detection methods that produce point-like signals within the cells. Accurate signal detection and segmentation is often a key step in analysis of these types of images. Cells in a sample will always show some degree of variation in DNA and protein expression and to quantify these variations each cell has to be analyzed individually. This thesis presents development and evaluation of single cell analysis on a range of different types of image data. In addition, we present a novel method for signal detection in three dimensions.

HTS systems often use a combination of microscopy and image analysis to analyze cell-based samples. However, many diseases and biological pathways can be better studied in whole animals, particularly those that involve organ systems and multi-cellular interactions. The zebrafish is a widely-used vertebrate model of human organ function and development. Our collaborators have developed a high-throughput platform for cellular-resolution in vivo chemical and genetic screens on zebrafish larvae. This thesis presents improvements to the system, including accurate positioning of the fish which incorporates methods for detecting regions of interest, making the system fully automatic. Furthermore, the thesis describes a novel high-throughput tomography system for screening live zebrafish in both fluorescence and bright field microscopy. This 3D imaging approach combined with automatic quantification of morphological changes enables previously intractable high-throughput screening of vertebrate model organisms.

Microscopy in combination with image analysis has emerged as one of the most powerful and informative ways to analyze cell-based high-throughput screening (HTS) samples in experiments designed to uncover novel drugs and drug targets. However, many diseases and biological pathways can be better studied in whole animals, particularly diseases and pathways that involve organ systems and multicellular interactions, such as organ development, neuronal degeneration and regeneration, cancer metastasis, infectious disease progression and pathogenesis. The zebrafish is a widespread and popular vertebrate model of human organ function and development, and it is unique in the sense that large-scale in vivo genetic and chemical studies are feasible due in part to its small size, optical transparency, and aquatic habitat. To improve the throughput and complexity of zebrafish screens, a high-throughput platform for cellular-resolution in vivo chemical and genetic screens on zebrafish larvae has been developed at the Yanik lab at the Research Laboratory of Electronics, MIT, USA. The system loads live zebrafish from reservoirs or multiwell plates, positions and rotates them for high-speed confocal imaging of organs, and dispenses the animals without damage. We present two improvements to the described system, including automation of the positioning of the animals and a novel approach for tomographic brightfield imaging of living animals.

Current research in iris recognition is moving towards enabling more relaxed acquisition conditions. This has effects on the quality of acquired images, with low resolution being a predominant issue. Here, we evaluate a super-resolution algorithm that reconstructs iris images based on Eigen-transformation of local image patches. Each patch is reconstructed separately, allowing better quality of enhanced images by preserving local information. Contrast enhancement is used to improve the reconstruction quality, while matcher fusion is adopted to improve iris recognition performance. We validate the system using a database of 1,872 near-infrared iris images. The presented approach is superior to bilinear or bicubic interpolation, especially at lower resolutions, and the fusion of the two systems pushes the EER below 5% for down-sampling factors up to an image size of only 13×13.

The multilinear least-squares (MLLS) problem is an extension of the linear least-squares problem. The difference is that a multilinear operator is used in place of a matrix-vector product. The MLLS is typically a large-scale problem characterized by a large number of local minimizers. It originates, for instance, from the design of filter networks. We present a global search strategy that allows for moving from one local minimizer to a better one. The efficiency of this strategy is illustrated by results of numerical experiments performed for some problems related to the design of filter networks.
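
The simplest MLLS instance arises when a target filter is approximated by a convolution of two short sub-filters, a bilinear problem that alternating least squares attacks one factor at a time. A minimal NumPy sketch (a local strategy only; the global search described above would move between the local minimizers such iterations find):

```python
import numpy as np

def conv_matrix(g, n_other, n_out):
    """Matrix C such that C @ h == np.convolve(g, h) for len(h) == n_other."""
    C = np.zeros((n_out, n_other))
    for i, gi in enumerate(g):
        for j in range(n_other):
            C[i + j, j] = gi
    return C

def mlls_als(f, n1, n2, iters=50, seed=0):
    """Alternating least squares for min || f - g1 * g2 || (bilinear)."""
    rng = np.random.default_rng(seed)
    g1, g2 = rng.normal(size=n1), rng.normal(size=n2)
    for _ in range(iters):
        # Fix g1, solve a linear least-squares problem for g2; then swap.
        g2, *_ = np.linalg.lstsq(conv_matrix(g1, n2, len(f)), f, rcond=None)
        g1, *_ = np.linalg.lstsq(conv_matrix(g2, n1, len(f)), f, rcond=None)
    return g1, g2

# Target built from known factors, so the residual should shrink.
target = np.convolve([1.0, 2.0, 1.0], [1.0, -1.0])
g1, g2 = mlls_als(target, 3, 2)
print(np.linalg.norm(target - np.convolve(g1, g2)))
```

The scale ambiguity between the factors (g1 can grow while g2 shrinks) is one reason the minimizers of the full problem are singular and non-isolated.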


Filter networks are a powerful tool for reducing image processing time while maintaining reasonably high quality. They are composed of sparse sub-filters whose low sparsity ensures fast image processing. The filter network design is related to solving a sparse optimization problem in which a cardinality constraint bounds the sparsity level from above. In the case of sequentially connected sub-filters, which is the simplest network structure of those considered in this paper, a cardinality-constrained multilinear least-squares (MLLS) problem is to be solved. If the cardinality constraint is disregarded, the MLLS is typically a large-scale problem characterized by a large number of local minimizers, each of which is singular and non-isolated. The cardinality constraint makes the problem even more difficult to solve. An approach for approximately solving the cardinality-constrained MLLS problem is presented. It is then applied to solving a bi-criteria optimization problem in which both the time and the quality of image processing are optimized. The developed approach is extended to designing filter networks of a more general structure. Its efficiency is demonstrated by designing certain 2D and 3D filter networks, and it is compared with existing approaches.

The aim of this project is to keep the x-ray exposure of the patient as low as reasonably achievable while improving the diagnostic image quality for the radiologist. The means to achieve these goals is to develop and evaluate an efficient adaptive filtering (denoising/image enhancement) method that fully explores true 4D image acquisition modes.

The proposed prototype system uses a novel filter set whose directional filter responses are monomials. The monomial filter concept is used both for estimation of local structure and for the anisotropic adaptive filtering. Initial tests on clinical 4D CT heart data with ECG-gated exposure have resulted in a significant reduction of the noise level and increased detail compared with 2D and 3D methods. Another promising feature is that the reconstruction-induced streak artifacts, which generally occur in low-dose CT, are remarkably reduced in 4D.

Level set methods are a popular way to solve the image segmentation problem. The solution contour is found by solving an optimization problem where a cost functional is minimized. Gradient descent methods are often used to solve this optimization problem since they are very easy to implement and applicable to general nonconvex functionals. They are, however, sensitive to local minima and often display slow convergence. Traditionally, cost functionals have been modified to avoid these problems. In this paper, we instead propose using two modified gradient descent methods, one using a momentum term and one based on resilient propagation. These methods are commonly used in the machine learning community. In a series of 2-D/3-D-experiments using real and synthetic data with ground truth, the modifications are shown to reduce the sensitivity for local optima and to increase the convergence rate. The parameter sensitivity is also investigated. The proposed methods are very simple modifications of the basic method, and are directly compatible with any type of level set implementation. Downloadable reference code with examples is available online.
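
The momentum modification is generic and easy to state outside the level-set setting: the update direction becomes a running average of past gradients, which damps oscillation and can carry the iterate past shallow local minima. A minimal sketch on a simple quadratic (learning rate and momentum coefficient are illustrative):

```python
import numpy as np

# Gradient descent with a momentum term: the velocity v accumulates
# past gradients, so consistent directions accelerate while
# oscillating directions partially cancel.
def grad_descent_momentum(grad, x0, lr=0.1, beta=0.9, iters=200):
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(iters):
        v = beta * v + lr * grad(x)  # running average of gradients
        x = x - v
    return x

grad = lambda x: np.array([2 * x[0], 20 * x[1]])  # gradient of x^2 + 10 y^2
x_min = grad_descent_momentum(grad, [3.0, 2.0])
print(x_min)
```

Resilient propagation (the paper's second modification) instead adapts a per-component step size from the sign of successive gradients, ignoring the gradient magnitude entirely.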

Purpose: To quantitatively and qualitatively evaluate the water-signal performance of the consistent intensity inhomogeneity correction (CIIC) method in correcting for intensity inhomogeneities. Methods: Water-fat volumes were acquired using 1.5 Tesla (T) and 3.0T symmetrically sampled 2-point Dixon three-dimensional MRI. Two datasets were used: (i) 10 muscle tissue regions of interest (ROIs) from 10 subjects acquired with both 1.5T and 3.0T whole-body MRI; (ii) seven liver tissue ROIs from 36 patients imaged using 1.5T MRI at six time points after Gd-EOB-DTPA injection. The performance of CIIC was evaluated quantitatively by analyzing its impact on the dispersion and bias of the water-image ROI intensities, and qualitatively using side-by-side image comparisons.

The CPT-violation parameters Re(δ) and Im(δ) determined recently by CPLEAR are used to evaluate the K0–K̄0 mass and decay-width differences, as given by the difference between the diagonal elements of the neutral-kaon mixing matrix (M − iΓ/2). The results (in GeV) are consistent with CPT invariance. CPT invariance is also shown to hold to within a few times 10⁻³–10⁻⁴ for many of the amplitudes describing neutral-kaon decays to different final states.

We apply a forward dispersion relation to the regeneration amplitude for kaon scattering on ¹²C using all available data. The CPLEAR data at low energies allow the determination of the net contribution from the subthreshold region, which turns out to be much smaller than earlier evaluations, solving a long-standing puzzle.

Neutral-kaon decays to πeν were analysed to determine the q² dependence of the K0e3 electroweak form factor f+. Based on 365,612 events, this form factor was found to have a linear dependence on q² with a slope λ+ = 0.0245 ± 0.0012 (stat) ± 0.0022 (syst).

The aim was to compare the osseointegration of grit-blasted implants with and without a hydrogen fluoride treatment in rat tibia and femur, and to visualize bone formation using state-of-the-art 3D visualization techniques. Grit-blasted implants were inserted in femur and tibia of 10 Sprague-Dawley rats (4 implants/rat). Four weeks after insertion, bone implant samples were retrieved. Selected samples were imaged in 3D using Synchrotron Radiation-based CT (SRCT). The 3D data was quantified and visualized using two novel visualization techniques, thread fly-through and 2D unfolding. All samples were processed to cut and ground sections and 2D histomorphometrical comparisons of bone implant contact (BIC), bone area (BA), and mirror image area (MI) were performed. BA values were statistically significantly higher for test implants than controls (p<0.05), but BIC and MI data did not differ significantly. Thus, the results partly indicate improved bone formation at blasted and hydrogen fluoride treated implants, compared to blasted implants. The 3D analysis was a valuable complement to 2D analysis, facilitating improved visualization. However, further studies are required to evaluate aspects of 3D quantitative techniques, with relation to light microscopy that traditionally is used for osseointegration studies. (c) 2014 Wiley Periodicals, Inc. J Biomed Mater Res Part B: Appl Biomater, 103B: 12-20, 2015.

Digital pathology holds the promise of improved workflow and of the use of image analysis to extract features from tissue samples for quantitative analysis, improving on current subjective analysis of, for example, cancer tissue. But this requires fast and reliable image digitization. In this paper we address image blurriness, which is a particular problem with very large images or tissue microarrays scanned with whole-slide scanners, since autofocus methods may fail when there is a large variation in image content. We introduce a method to detect, quantify and display blurriness from whole-slide images (WSI) in real time. We describe a blurriness measurement based on an ideal high-pass filter in the frequency domain. In contrast with other methods, our method does not require any prior knowledge of the image content, and it produces a continuous blurriness map over the entire WSI. This map can be displayed as an overlay of the original data and viewed at different levels of magnification with zoom and pan features. The computation time for an entire WSI is around 5 minutes on an average workstation, which is about 180 times faster than existing methods.
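
The core of such a measurement, the share of spectral energy passed by an ideal high-pass filter, can be sketched in a few lines of NumPy. The cut-off radius and the crude neighbourhood blur used for the comparison are illustrative, not the paper's parameters:

```python
import numpy as np

# Sharpness score per tile: fraction of spectral energy above a
# cut-off radius (an ideal high-pass filter in the frequency domain).
# Blurring suppresses high frequencies, so the score drops.
def highpass_energy_fraction(tile, cutoff=0.25):
    F = np.fft.fftshift(np.fft.fft2(tile - tile.mean()))
    ny, nx = tile.shape
    y, x = np.mgrid[0:ny, 0:nx]
    r = np.hypot((y - ny / 2) / ny, (x - nx / 2) / nx)  # normalized radius
    energy = np.abs(F) ** 2
    return energy[r > cutoff].sum() / (energy.sum() + 1e-12)

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
# Crude blur: average each pixel with its 4 neighbours (periodic edges).
blurred = sum(np.roll(np.roll(sharp, dy, 0), dx, 1)
              for dy, dx in [(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)]) / 5
print(highpass_energy_fraction(sharp) > highpass_energy_fraction(blurred))
```

Evaluated tile by tile over a WSI, such a score yields exactly the kind of continuous blurriness map described above.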

The main benefit of imlook4d is that it is easily extendable with scripts, which access exported variables such as the image matrix (4D) and a region-of-interest (ROI) matrix. Scripts are available via a menu in the imlook4d GUI and can be used to manipulate the image matrix and ROI data. There is also a menu option to export and import these variables to and from the Matlab workspace for interactive manipulation, useful for one-off fixes or for script development. There are presently about 30 scripts in categories such as ROI, Matrix, and Header info. There is also direct export to ImageJ [1] and import back from ImageJ, giving access to all tools available within ImageJ.

Imlook4d has a built in volume-of-interest editor, with a brush tool for quick interactive ROI delineation, and via scripts, different ways of thresholding ROIs from parts of the image. Time activity data is saved to a tab-delimited text file.

The principal-component (PC) based Hotelling filter is an integrated part of the program and allows for interactive noise reduction without loss of quantitation [2]. A typical workflow for a dynamic data set is to turn on the filter for ROI delineation, with the option of turning it off for export of time-activity data. The PC images can also be used to draw ROIs on, which in some circumstances gives enhanced contrast.

Calculation of parametric pharmacokinetic modelling images can be performed interactively, calculated slice by slice as the user scrolls through the volume. Reference models for Patlak, Logan and Averaged Simple Flow Model [3] applied on 15O-water are implemented, and it is relatively easy to implement other kinetic models. Similarly, scripts have been developed for regional Patlak and Logan models on ROI data.
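
For an irreversibly trapped tracer, the Patlak model predicts that Ct(t)/Cp(t) is linear in (∫Cp dτ)/Cp(t), with slope Ki; fitting this line per voxel (or per ROI) yields the parametric slope image. A minimal NumPy sketch on synthetic curves (frame times and rate constants are illustrative):

```python
import numpy as np

# Patlak graphical analysis: after an equilibration time, the plot of
# Ct/Cp against (integral of Cp)/Cp is linear with slope Ki.
t = np.linspace(0.1, 60.0, 30)               # mid-frame times (minutes)
cp = 10.0 * np.exp(-0.1 * t) + 1.0           # plasma input (arbitrary units)
int_cp = np.concatenate(([0.0],              # trapezoidal running integral
                         np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))))
ki_true, v0 = 0.05, 0.3
ct = ki_true * int_cp + v0 * cp              # ideal Patlak tissue curve

x = int_cp / cp                              # "stretched time"
y = ct / cp
late = t > 20.0                              # fit only the linear tail
ki, intercept = np.polyfit(x[late], y[late], 1)
print(round(ki, 4), round(intercept, 4))
```

On noisy data, pre-filtering the dynamic sequence (for example with the Hotelling filter) stabilizes the per-voxel fit, which is the rationale for combining the two steps.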

Background: In this paper we apply the principal-component analysis filter (Hotelling filter) to reduce noise from dynamic positron-emission tomography (PET) patient data, for a number of different radiotracer molecules. We furthermore show how preprocessing images with this filter improves parametric images created from such dynamic sequences. We use zero-mean unit-variance normalization prior to performing a Hotelling filter on the slices of a dynamic time series. The scree-plot technique was used to determine which principal components to reject in the filter process. This filter was applied to [11C]-acetate on heart and head-neck tumors, [18F]-FDG on liver tumors and brain, and [11C]-Raclopride on brain. Simulations of blood and tissue regions with noise properties matched to real PET data were used to analyze how quantitation and resolution are affected by the Hotelling filter. Summing varying parts of a 90-frame [18F]-FDG brain scan, we created 9-frame dynamic scans with image statistics comparable to 20 MBq, 60 MBq and 200 MBq injected activity. Hotelling filtering performed on slices (2D) and on volumes (3D) was compared.
Results: The 2D Hotelling filter reduces noise in the tissue uptake drastically, so that it becomes simple to manually pick out regions of interest from noisy data. The 2D Hotelling filter introduces less bias than the 3D Hotelling filter in focal Raclopride uptake. Simulations show that the Hotelling filter is sensitive to the typical blood peak in PET before tissue uptake has commenced, introducing a negative bias in early tissue uptake. Quantitation on real dynamic data is reliable. Two examples clearly show that pre-filtering the dynamic sequence with the Hotelling filter prior to Patlak-slope calculations gives clearly improved parametric image quality. We also show that a dramatic dose reduction can be achieved for Patlak slope images without changing image quality or quantitation.
Conclusions: The 2D Hotelling filtering of dynamic PET data is a computer-efficient method that gives visually improved differentiation of different tissues, which we have observed to improve manual and automated region-of-interest delineation of dynamic data. Parametric Patlak images on Hotelling-filtered data display improved clarity compared with non-filtered Patlak slope images, without measurable loss of quantitation, and allow a dramatic decrease in patient injected dose.

An automated method for the segmentation and tracking of moving vessel walls in 2D ultrasound image sequences is introduced. The method was tested on simulated and real ultrasound image sequences of the carotid artery. Tracking was achieved via a self-organizing neural network known as Growing Neural Gas. This topology-preserving algorithm assigns a net of nodes connected by edges that distributes itself within the vessel walls and adapts to changes in topology with time. The movement of the nodes was analyzed to uncover the dynamics of the vessel wall. In this way, radial and longitudinal strain and strain rates were estimated. Finally, wave intensity signals were computed from these measurements. The method proposed improves upon wave intensity wall analysis (WIWA) and opens up the possibility of easy and efficient analysis and diagnosis of vascular disease through noninvasive ultrasonic examination.

Automated tissue image analysis aims to develop algorithms for a variety of histological applications. This has important implications in the diagnostic grading of cancer such as in breast and prostate tissue, as well as in the quantification of prognostic and predictive biomarkers that may help assess the risk of recurrence and the responsiveness of tumors to endocrine therapy.

In this thesis, we use pattern recognition and image analysis techniques to solve several problems relating to histopathology and immunohistochemistry applications. In particular, we present a new method for the detection and localization of tissue microarray cores in an automated manner and compare it against conventional approaches.

We also present an unsupervised method for color decomposition based on modeling the image formation process while taking into account acquisition noise. The method overcomes the limitation of having to specify absorption spectra for the stains that require separation. This is done by estimating reference colors through fitting a Gaussian mixture model trained using expectation-maximization.
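
The reference-color estimation step can be sketched with a small expectation-maximization loop: pixels from two stains form two clusters whose means the mixture model recovers. Shown in 1-D for brevity (real stain vectors live in a 3-D color or optical-density space, and all numbers here are synthetic):

```python
import numpy as np

# Two-component 1-D Gaussian mixture fitted with EM; the recovered
# component means play the role of the reference stain colors.
def em_two_gaussians(x, iters=100):
    mu = np.array([x.min(), x.max()])      # spread-out initialization
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each pixel
        d = (x[:, None] - mu) / sigma
        p = pi * np.exp(-0.5 * d ** 2) / sigma
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and standard deviations
        n = r.sum(axis=0)
        pi = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n)
    return mu

rng = np.random.default_rng(0)
pixels = np.concatenate([rng.normal(0.2, 0.05, 500),   # "stain 1" pixels
                         rng.normal(0.8, 0.05, 500)])  # "stain 2" pixels
means = np.sort(em_two_gaussians(pixels))
print(means)
```

Extending this to 3-D color vectors with full covariances gives the reference-color estimator described above.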

Another important factor in histopathology is the choice of stain, though it often goes unnoticed. Stain color combinations determine the extent of overlap between chromaticity clusters in color space, and this intrinsic overlap sets a main limitation on the performance of classification methods, regardless of their nature or complexity. In this thesis, we present a framework for optimizing the selection of histological stains in a manner that is aligned with the final objective of automation, rather than visual analysis.

Immunohistochemistry can facilitate the quantification of biomarkers such as estrogen, progesterone, and the human epidermal growth factor 2 receptors, in addition to Ki-67 proteins that are associated with cell growth and proliferation. As an application, we propose a method for the identification of paired antibodies based on correlating probability maps of immunostaining patterns across adjacent tissue sections.

Finally, we present a new feature descriptor for characterizing glandular structure and tissue architecture, which form an important component of Gleason and tubule-based Elston grading. The method is based on defining shape-preserving neighborhood annuli around lumen regions and gathering quantitative and spatial data concerning the various tissue types.

Whole-slide imaging of tissue microarrays (TMAs) holds the promise of automated image analysis of a large number of histopathological samples from a single slide. This demands high-throughput image processing to enable analysis of these tissue samples for diagnosis of cancer and other conditions. In this paper, we present a completely automated method for the accurate detection and localization of tissue cores that is based on geometric restoration of the core shapes without placing any assumptions on grid geometry. The method relies on hierarchical clustering in conjunction with the Davies-Bouldin index for cluster validation in order to estimate the number of cores in the image wherefrom we estimate the core radius and refine this estimate using morphological granulometry. The final stage of the algorithm reconstructs circular discs from core sections such that these discs cover the entire region of each core regardless of the precise shape of the core. The results show that the proposed method is able to reconstruct core locations without any evidence of localization error. Furthermore, the algorithm is more efficient than existing methods based on the Hough transform for circle detection. The algorithm's simplicity, accuracy, and computational efficiency allow for automated high-throughput analysis of microarray images.
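The core-count estimation step can be sketched roughly as follows, assuming hierarchical (Ward) clustering of detected blob centroids and a hand-rolled Davies-Bouldin index; the point data and parameter values are invented for illustration:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def davies_bouldin(points, labels):
    """Davies-Bouldin index: lower means more compact, better-separated clusters."""
    ks = np.unique(labels)
    cents = np.array([points[labels == k].mean(axis=0) for k in ks])
    scat = np.array([np.linalg.norm(points[labels == k] - c, axis=1).mean()
                     for k, c in zip(ks, cents)])
    db = 0.0
    for i in range(len(ks)):
        ratios = [(scat[i] + scat[j]) / np.linalg.norm(cents[i] - cents[j])
                  for j in range(len(ks)) if j != i]
        db += max(ratios)
    return db / len(ks)

def estimate_core_count(points, k_max=10):
    """Pick the cluster count that minimizes the Davies-Bouldin index."""
    Z = linkage(points, method='ward')
    best_k, best_db = None, np.inf
    for k in range(2, k_max + 1):
        labels = fcluster(Z, k, criterion='maxclust')
        db = davies_bouldin(points, labels)
        if db < best_db:
            best_k, best_db = k, db
    return best_k

# Invented centroids from three well-separated cores
rng = np.random.default_rng(0)
centers = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
pts = np.vstack([c + rng.normal(0, 0.5, size=(30, 2)) for c in centers])
n_cores = estimate_core_count(pts)
```

In the full algorithm the estimated count then feeds the radius estimate, which is refined by granulometry before the circular discs are reconstructed.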

Comparing staining patterns of paired antibodies designed against the same protein but toward different epitopes provides quality control over the binding and over the antibodies' ability to identify the target protein correctly and exclusively. We present a method for automated quantification of immunostaining patterns for antibodies in breast tissue using the Human Protein Atlas database. In such tissue, the dark brown dye 3,3'-diaminobenzidine is used as an antibody-specific stain whereas the blue dye hematoxylin is used as a counterstain. The proposed method is based on clustering and relative scaling of features following principal component analysis. Our method is able (1) to accurately segment and identify staining patterns and quantify the amount of staining and (2) to detect paired antibodies by correlating the segmentation results among different cases. Moreover, the method is simple, operating in a low-dimensional feature space, and computationally efficient, which makes it suitable for high-throughput processing of tissue microarrays.
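The correlation step might be sketched as follows; the maps, sizes, and thresholds below are illustrative, not those of the actual system:

```python
import numpy as np

def staining_correlation(map_a, map_b):
    """Pearson correlation between two immunostaining probability maps."""
    a, b = np.asarray(map_a, float).ravel(), np.asarray(map_b, float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

# Invented maps: a true antibody pair should stain a similar region
# on adjacent sections, an unrelated antibody should not.
rng = np.random.default_rng(0)
base = rng.random((64, 64))
paired = np.clip(base + rng.normal(0, 0.1, base.shape), 0, 1)  # similar pattern
unrelated = rng.random((64, 64))                               # independent pattern
r_paired = staining_correlation(base, paired)
r_unrelated = staining_correlation(base, unrelated)
```

High correlation across adjacent sections flags a candidate antibody pair; low correlation suggests the antibodies bind different targets.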

Due to the complexity of biological tissue and variations in staining procedures, features that are based on the explicit extraction of properties from subglandular structures in tissue images may have difficulty generalizing well over an unrestricted set of images and staining variations. We circumvent this problem by an implicit representation that is both robust and highly descriptive, especially when combined with a multiple instance learning approach to image classification. The new feature method is able to describe tissue architecture based on glandular structure. It is based on statistically representing the relative distribution of tissue components around lumen regions, while preserving spatial and quantitative information, as a basis for diagnosing and analyzing different areas within an image. We demonstrate the efficacy of the method in extracting discriminative features for obtaining high classification rates for tubular formation in both healthy and cancerous tissue, which is an important component in Gleason and tubule-based Elston grading. The proposed method may be used for glandular classification, also in other tissue types, in addition to general applicability as a region-based feature descriptor in image analysis where the image represents a bag with a certain label (or grade) and the region-based feature vectors represent instances.
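A rough sketch of the annulus idea, assuming the annuli are defined from the distance transform of the lumen mask (which makes them follow the lumen shape); the labels, sizes, and parameters are invented for illustration:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def annulus_descriptor(lumen_mask, tissue_labels, n_annuli=3, width=5, n_types=3):
    """Tissue-type composition in shape-preserving annuli around a lumen.

    Each annulus is a band of constant distance from the lumen boundary,
    so it preserves the lumen's shape rather than being a circular ring."""
    dist = distance_transform_edt(~lumen_mask)   # distance to nearest lumen pixel
    feats = []
    for a in range(n_annuli):
        ring = (dist > a * width) & (dist <= (a + 1) * width)
        counts = np.bincount(tissue_labels[ring], minlength=n_types).astype(float)
        feats.append(counts / max(counts.sum(), 1.0))  # normalized composition
    return np.concatenate(feats)

# Invented gland: lumen in the center, epithelium (type 1) close to it,
# stroma (type 2) farther out.
yy, xx = np.mgrid[:60, :60]
d = np.hypot(yy - 30, xx - 30)
lumen = d < 5
labels = np.where(d < 12, 1, 2)
feat = annulus_descriptor(lumen, labels)
```

Each lumen then contributes one such feature vector, and an image becomes a bag of these instances for the multiple instance learning classifier.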

For a polygonal domain with h holes and a total of n vertices, we present algorithms that compute the L1 geodesic diameter in O(n^2 + h^4) time and the L1 geodesic center in O((n^4 + n^2 h^4) α(n)) time, where α(·) denotes the inverse Ackermann function. No algorithms were known for these problems before. For the Euclidean counterpart, the best algorithms compute the geodesic diameter in O(n^7.73) or O(n^7 (h + log n)) time, and compute the geodesic center in O(n^(12+ε)) time. Therefore, our algorithms are much faster than the algorithms for the Euclidean problems. Our algorithms are based on several interesting observations on L1 shortest paths in polygonal domains.
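For intuition only (this is not the paper's polygonal-domain algorithm), the L1 geodesic diameter can be approximated on a discretized domain with breadth-first search, since each unit grid step costs exactly 1 under the L1 metric:

```python
from collections import deque

def l1_geodesic_dists(grid, src):
    """BFS distances on a 4-connected grid; True cells are obstacles (holes)."""
    h, w = len(grid), len(grid[0])
    dist = [[None] * w for _ in range(h)]
    dist[src[0]][src[1]] = 0
    q = deque([src])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not grid[nr][nc] and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    return dist

def l1_geodesic_diameter(grid):
    """Max geodesic distance over all pairs of free cells (brute force)."""
    free = [(r, c) for r in range(len(grid))
            for c in range(len(grid[0])) if not grid[r][c]]
    best = 0
    for s in free:
        d = l1_geodesic_dists(grid, s)
        best = max(best, max(d[r][c] for r, c in free if d[r][c] is not None))
    return best
```

Without obstacles the geodesic diameter equals the plain L1 diameter; with a hole in the way, paths must detour around it, which is exactly the effect the paper's algorithms handle combinatorially rather than by discretization.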

MRI is a well-known and widespread technique for characterizing cardiac pathologies, owing to its high spatial resolution, its accessibility, and its adjustable contrast among soft tissues.

In an attempt to close the gap between blood flow, myocardial motion, and cardiac function, research in the MRI field addresses the quantification of some of the most relevant blood and myocardial parameters.

During this project, a new tool has been developed that allows reading, postprocessing, quantifying, and visualizing 2D motion-sensitive MR data. To analyze intracardiac blood flow and wall motion, the tool quantifies velocity, turbulent kinetic energy, pressure, and strain.
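As an illustrative sketch of one of these quantities, turbulent kinetic energy per voxel can be computed from the temporal variance of the velocity components; the density value and array layout here are assumptions, not the tool's actual implementation:

```python
import numpy as np

def turbulent_kinetic_energy(vel, rho=1060.0):
    """TKE per voxel (J/m^3) from a time-resolved velocity field.

    vel: array of shape (n_times, n_components, ...) in m/s.
    rho: fluid density in kg/m^3 (blood is roughly 1060).
    TKE = rho/2 * sum over components of the mean squared fluctuation,
    which equals the temporal variance of each component."""
    return 0.5 * rho * vel.var(axis=0).sum(axis=0)

# Invented example: steady 0.5 m/s flow in x plus 0.1 m/s fluctuations in y
rng = np.random.default_rng(0)
vel = np.zeros((1000, 3, 4, 4))
vel[:, 0] = 0.5
vel[:, 1] = rng.normal(0, 0.1, (1000, 4, 4))
tke = turbulent_kinetic_energy(vel)   # roughly 0.5 * 1060 * 0.01 = 5.3 J/m^3
```

The steady component contributes nothing, as expected: TKE measures only the fluctuating part of the flow.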

In the results section, the final tool is presented: the visualization modes, which represent the quantified parameters both individually and combined, are described, and auxiliary features such as masking, thresholding, zooming, and cursors are detailed.

Finally, technical aspects are discussed, such as the suitability of two-dimensional examinations for building compact tools and the challenges of masking as part of the relative pressure calculation; the work ends with proposals for future development.

As glial cells in the brain, astrocytes have diverse functional roles in the central nervous system. In the presence of harmful stimuli, astrocytes modify their functional and structural properties, a condition called reactive astrogliosis. Here, a protocol for assessment of the morphological properties of astrocytes is presented. This protocol includes quantification of 12 different parameters, i.e., the surface area and volume of the tissue covered by an astrocyte (astrocyte territory), of the entire astrocyte including branches, and of the cell body and nucleus, as well as the total length and number of branches, the intensity of fluorescence immunoreactivity of the antibodies used for astrocyte detection, and astrocyte density (number per 1,000 μm²). For this purpose, three-dimensional (3D) confocal microscopic images were created, and 3D image analysis software such as Volocity 6.3 was used for measurements. Rat brain tissue exposed to amyloid β(1-40) (Aβ(1-40)) with or without a therapeutic intervention was used to present the method. This protocol can also be used for 3D morphometric analysis of other cells from either in vivo or in vitro conditions.
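Two of the listed parameters, volume and surface area, can be sketched for a 3D binary mask with plain voxel counting (the actual protocol uses Volocity 6.3; this code is only an illustration with invented voxel sizes):

```python
import numpy as np

def mask_volume_and_surface(mask, voxel=(0.2, 0.1, 0.1)):
    """Volume and surface area of a 3D binary mask (e.g., a segmented astrocyte).

    voxel: (dz, dy, dx) spacings in micrometers (illustrative values)."""
    dz, dy, dx = voxel
    volume = mask.sum() * dz * dy * dx
    # Surface area: count exposed voxel faces along each axis
    area = 0.0
    face = {0: dy * dx, 1: dz * dx, 2: dz * dy}
    for ax in range(3):
        diff = np.diff(mask.astype(np.int8), axis=ax)
        exposed = np.abs(diff).sum()          # inside-to-outside transitions
        first = np.take(mask, 0, axis=ax).sum()   # faces on the array border
        last = np.take(mask, -1, axis=ax).sum()
        area += (exposed + first + last) * face[ax]
    return volume, area

# Sanity check: a 10x10x10 solid cube of 1 um^3 voxels
cube = np.zeros((12, 12, 12), dtype=bool)
cube[1:11, 1:11, 1:11] = True
vol, area = mask_volume_and_surface(cube, voxel=(1.0, 1.0, 1.0))
```

For the cube this yields a volume of 1000 μm³ and a surface area of 600 μm², matching the analytic values.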

In the last decade, mathematical models of tumor growth have been studied increasingly, particularly for solid tumors whose growth is mainly driven by cellular proliferation. In this paper we propose a modified model to simulate the growth of gliomas at different stages. Glioma growth is modeled by a reaction-advection-diffusion equation. We begin with a model of untreated gliomas and continue with models of polyclonal glioma following chemotherapy. From relatively simple assumptions involving homogeneous brain tissue bounded by a few gross anatomical landmarks (ventricles and skull), the models have been expanded to include heterogeneous brain tissue with different motilities of glioma cells in grey and white matter. Tumor growth is characterized by a dangerous change in the control mechanisms that normally maintain a balance between the rate of proliferation and the rate of apoptosis (controlled cell death). Results show that this model agrees closely with clinical findings and can simulate brain tumor behavior properly.

OBJECTIVES: We have developed a method using transthoracic echocardiography to establish optimal visualization of the aortic root and to reduce the amount of contrast medium used in each patient.

BACKGROUND: During transcatheter aortic valve implantation, it is necessary to obtain an optimal fluoroscopic projection for deployment of the valve showing the aortic ostium with the three cusps aligned in the beam direction. This may require repeat aortic root angiograms at this stage of the procedure with a high amount of contrast medium with a risk of detrimental influence on renal function.

METHODS: We compared the conventional approach with an echo-guided approach to optimizing visualization of the aortic root. Echocardiography was used initially, allowing easier alignment of the image intensifier with the transducer's direction.

RESULTS: Contrast volumes, radiation/fluoroscopy exposure times, and postoperative creatinine levels were significantly less in patients having the echo-guided orientation of the optimal fluoroscopic angles compared with patients treated with the conventional approach.

CONCLUSION: We present a user-friendly echo-guided method to facilitate fluoroscopy adjustment during transcatheter aortic valve implantation. In our series, the amounts of contrast medium and radiation have been significantly reduced, with a concomitant reduction in detrimental effects on renal function in the early postoperative phase.

There is growing interest in fully MR-based radiotherapy. The most important development needed is improved bone tissue estimation, as existing model-based methods have performed poorly on bone tissues. This paper aims to obtain improved estimation of bone tissues. A skew-Gaussian mixture model (SGMM) is proposed to further investigate CT image estimation from MR images. The estimation quality of the proposed model is evaluated on real data using leave-one-out cross-validation. In comparison with existing model-based approaches, the approach in this paper performs better in the estimation of bone tissues, especially dense bone.
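To illustrate why a skewed component can matter, the sketch below fits both a skew-normal and a plain normal distribution to simulated right-skewed intensity data (all values invented) and compares log-likelihoods; this is not the paper's SGMM implementation:

```python
import numpy as np
from scipy.stats import skewnorm, norm

# Invented bone-like intensity data: right-skewed, as dense bone often is
rng = np.random.default_rng(0)
data = skewnorm.rvs(a=5, loc=1000, scale=300, size=3000, random_state=rng)

# Fit a skew-normal and a plain normal by maximum likelihood
a_hat, loc_hat, scale_hat = skewnorm.fit(data)
mu_hat, sigma_hat = norm.fit(data)

# The skewed model should explain asymmetric data better
ll_skew = skewnorm.logpdf(data, a_hat, loc_hat, scale_hat).sum()
ll_norm = norm.logpdf(data, mu_hat, sigma_hat).sum()
```

A mixture whose components can be skewed in this way is the intuition behind replacing Gaussian components with skew-Gaussian ones for bone.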

Cervical cancer is one of the most deadly and common forms of cancer among women if no action is taken to prevent it, yet it is preventable through a simple screening test, the so-called Pap smear. This is the most effective cancer prevention measure developed so far. However, visual examination of the smears is time-consuming and expensive, and there have been numerous attempts at automating the analysis ever since the test was introduced more than 60 years ago. The first commercial systems for automated analysis of the cell samples appeared around the turn of the millennium, but they have had limited impact on screening costs. In this paper we examine the key issues that need to be addressed when an automated analysis system is developed and discuss how these challenges have been met over the years. The lessons learned may be useful in the efforts to create a cost-effective screening system that could make affordable screening for cervical cancer available to all women globally, thus preventing most of the quarter million annual unnecessary deaths still caused by this disease.

Spectral imaging is the acquisition of multiple images of an object at different energy spectra. In mammography, dual-energy imaging (spectral imaging with two energy levels) has been investigated for several applications, in particular material decomposition, which allows for quantitative analysis of breast composition and quantitative contrast-enhanced imaging. Material decomposition with dual-energy imaging is based on the assumption that there are two dominant photon interaction effects that determine linear attenuation: the photoelectric effect and Compton scattering. This assumption limits the number of basis materials, i.e. the number of materials that are possible to differentiate between, to two. However, Rayleigh scattering may account for more than 10% of the linear attenuation in the mammography energy range. In this work, we show that a modified version of a scanning multi-slit spectral photon-counting mammography system is able to acquire three images at different spectra and can be used for triple-energy imaging. We further show that triple-energy imaging in combination with the efficient scatter rejection of the system enables measurement of Rayleigh scattering, which adds an additional energy dependency to the linear attenuation and enables material decomposition with three basis materials. Three available basis materials have the potential to improve virtually all applications of spectral imaging.
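Per ray, the three-basis decomposition reduces to a small linear system in the basis-material thicknesses; the attenuation values below are invented for illustration:

```python
import numpy as np

# Hypothetical effective linear attenuation coefficients (1/cm) of three
# basis materials at three acquisition spectra; values are illustrative only.
#              basis 1  basis 2  basis 3
A = np.array([[0.45,    0.60,    2.00],    # low-energy spectrum
              [0.30,    0.40,    1.20],    # middle spectrum
              [0.22,    0.28,    0.60]])   # high-energy spectrum

t_true = np.array([2.0, 1.5, 0.1])   # basis-material thicknesses (cm)
m = A @ t_true                       # simulated log-attenuation measurements

# With three spectra the 3x3 system is invertible, so three basis
# materials can be separated; with only two spectra it would be
# underdetermined, which is the limitation dual-energy imaging has.
t_est = np.linalg.solve(A, m)
```

In practice the per-spectrum coefficients come from calibration and the system is solved in a noise-aware way, but the counting argument (three measurements for three unknowns) is the same.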