Various methods have been proposed to reduce speckle noise, which degrades image quality in ultrasound images. The Field II simulated cyst image, which consists of three classes, is used to compare a proposed despeckle filter against other well-known filters, and the ultrasound despeckling assessment index (USDSAI) is the metric used to evaluate despeckling filters on it. The metric is reliable only when the different regions of the image are properly defined. In this study, the authors first analysed the performance of USDSAI on the cyst image. Then, the authors modified USDSAI by proposing a new metric for the background class of the cyst image and evaluated its performance. The results show that the proposed metric outperforms USDSAI.
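The abstract does not reproduce the USDSAI formula. A commonly cited form scores between-class mean separation against within-class variance; the sketch below assumes that form (the exact normalisation in the paper may differ):

```python
# Sketch of a USDSAI-style despeckling index: between-class mean separation
# divided by total within-class variance. The exact normalisation used in
# the paper is an assumption here.
def usdsai(regions):
    """regions: list of pixel-value lists, one per image class."""
    means = [sum(r) / len(r) for r in regions]
    variances = [sum((v - m) ** 2 for v in r) / len(r)
                 for r, m in zip(regions, means)]
    between = sum((means[i] - means[j]) ** 2
                  for i in range(len(means))
                  for j in range(i + 1, len(means)))
    return between / sum(variances)
```

A good despeckle filter shrinks each class's variance while keeping the class means apart, so a higher index indicates better despeckling.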

The authors propose a new application-specific, post-acquisition quality evaluation method for brain magnetic resonance imaging (MRI) images. The domain of an MRI slice is regarded as the universal set. Four feature images (greyscale, local entropy, local contrast and local standard deviation) are extracted from the slice and transformed into the binary domain. Each feature image is regarded as a set enclosed by the universal set. Four quality attributes (lightness, contrast, sharpness and texture detail) are each described by a different combination of feature sets. In an ideal MRI slice, the four feature sets are identically equal; the degree of distortion in a real MRI slice is quantified by the fidelity between the sets that describe a quality attribute. Noise is the fifth quality attribute and is described by the slice's Euler number region property. The total quality score is the weighted sum of the five quality scores. The authors' proposed method addresses current challenges in image quality evaluation: it is simple, easy to use and easy to understand, and the incorporation of binary transformation reduces the computational and operational complexity of the algorithm. They provide experimental results that demonstrate the efficacy of the proposed method on good-quality images and on common distortions in MRI images of the brain.
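As an illustration of the set-fidelity idea, the sketch below scores a quality attribute by the overlap of two binarised feature images. The Dice coefficient is an assumed stand-in for the fidelity measure, which the abstract does not name; the weighted-sum rule for the total score follows the description above:

```python
# Fidelity between two binarised feature sets, sketched with the Dice
# overlap coefficient (an assumption; the abstract does not name the exact
# fidelity measure). In an ideal slice the sets coincide and the score is 1;
# distortion pulls it toward 0.
def dice_fidelity(set_a, set_b):
    """set_a, set_b: flat lists of 0/1 pixels from two binary feature images."""
    intersection = sum(1 for a, b in zip(set_a, set_b) if a and b)
    total = sum(set_a) + sum(set_b)
    return 2.0 * intersection / total if total else 1.0

# The total quality score is the weighted sum of the five attribute scores.
def total_quality(attribute_scores, weights):
    return sum(s * w for s, w in zip(attribute_scores, weights))
```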

The in-loop filter, which comprises a deblocking filter and a sample adaptive offset filter, is an important module for improving image quality in a high-efficiency video coding (HEVC) decoder. It has a high computational complexity, accounting for ∼20% of the HEVC decoding load, and its heavy conditional processing makes a high-performance implementation difficult. First, this study presents a novel reconfigurable HEVC in-loop filter implementation on a coarse-grained dynamically reconfigurable processing unit. Next, a repartition scheme is presented that allows the in-loop filter to be implemented at the coding tree unit level alongside the other decoding modules in the HEVC decoder, which satisfies the requirements of low-latency applications. Finally, a hierarchised-pipeline and synchronised-parallel technique is used to improve performance by eliminating data hazards in the pipeline and synchronisation problems in the parallel processing. Implementation results show that the presented HEVC in-loop filter processes up to 1920 × 1080@52 frames per second at 250 MHz. The throughput is 67.5× and 9× higher than solutions based on a digital signal processor and a general-purpose processor, respectively.

Glaucoma is a group of eye disorders that damage the optic nerve. Considering a single eye condition for the diagnosis of glaucoma has failed to detect all glaucoma cases accurately. A reliable computer-aided diagnosis system is proposed based on a novel combination of hybrid structural and textural features; it improves the decision-making process by analysing a variety of glaucoma conditions. The system consists of two main modules: a hybrid structural feature-set (HSF) and a hybrid texture feature-set (HTF). The HSF module classifies a sample from different structural glaucoma conditions using a support vector machine (SVM), while the HTF module analyses the sample based on various texture and intensity-based features and again makes a decision using an SVM. In the case of any conflict between the results of the two modules, a suspected class is introduced. A novel algorithm to compute super-pixels has also been proposed to detect the damaged cup; this feature alone outperformed the current state-of-the-art methods with 94% sensitivity. A cup-to-disc ratio calculation method for cup and disc segmentation, involving two different channels, has been introduced, increasing the overall accuracy. The proposed system has given exceptional results, with 100% accuracy for glaucoma referral.
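The conflict rule between the two modules can be sketched directly; the label names here are hypothetical:

```python
# Sketch of the two-module decision rule: when the structural (HSF) and
# textural (HTF) SVM decisions agree, that label is returned; on conflict
# the sample is flagged as a suspected case.
def fuse_decisions(hsf_label, htf_label, suspect_label="suspect"):
    return hsf_label if hsf_label == htf_label else suspect_label
```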

Recent advances in computational power have made it possible to use iterative reconstruction (IR) algorithms in clinics for computed tomographic (CT) imaging. Many researchers prefer IR methods to analytical methods because they reduce radiation dose, image noise, and artefacts. The simultaneous iterative reconstruction technique (SIRT) reduces the number of views needed for CT reconstruction; however, the reconstructed images include ray artefacts that can make diagnosis difficult. This study proposes a modified IR algorithm for fast, high-quality CT reconstruction. The modified method incorporates geometric non-linear diffusion into the reconstruction estimate to minimise ray artefacts, and it converges to the global minimum much faster than other methods, using a minimum number of iterations. To meet the high computational demand of improved IR algorithms, a graphics processing unit was used in this study. The authors expect that the proposed technique can be used to reconstruct high-quality CT images faster and with minimal iterations.
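The core SIRT update can be sketched on a toy linear system Ax = b; the paper's geometric non-linear diffusion step, applied to the estimate between iterations, is omitted here:

```python
# Baseline SIRT on a toy system (pure Python). Row/column sums provide the
# usual SIRT normalisation; the paper additionally interleaves a geometric
# non-linear diffusion step, which is not reproduced here.
def sirt(A, b, iters=200, lam=1.0):
    rows, cols = len(A), len(A[0])
    row_sum = [sum(abs(v) for v in A[i]) or 1.0 for i in range(rows)]
    col_sum = [sum(abs(A[i][j]) for i in range(rows)) or 1.0 for j in range(cols)]
    x = [0.0] * cols
    for _ in range(iters):
        # Normalised residual for every ray (projection)
        resid = [(b[i] - sum(A[i][j] * x[j] for j in range(cols))) / row_sum[i]
                 for i in range(rows)]
        # Simultaneous back-projection onto every pixel
        for j in range(cols):
            x[j] += lam * sum(A[i][j] * resid[i] for i in range(rows)) / col_sum[j]
    return x
```

On a consistent system the iterates converge to the solution; undersampled CT systems are where the ray artefacts mentioned above appear.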

According to the characteristics of salt-and-pepper noise, the authors first classify the pixels of a polluted image into two classes: suspected noisy pixels and noise-free pixels. For a suspected noisy pixel, counting the noise-free pixels with close grey levels in its neighbourhood determines whether it is actually noisy or noise-free. Noise-free pixels are left unprocessed. For the noisy pixels, an adaptive filtering algorithm using a weighted mean based on Euclidean distance achieves excellent noise removal and good detail preservation. The algorithm can handle different noise levels, and the parameters and thresholds need not be adjusted manually. The experimental results indicate that the proposed method effectively filters salt-and-pepper noise. The authors note that when noise-free and noisy pixels with the same grey level appear in the polluted image, the noise-removal performance of the proposed method is considerably better than that of other existing methods.
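A minimal sketch of the filtering stage follows. Detection is simplified here to flagging extreme grey levels (0 and 255) rather than the paper's neighbourhood-counting test, and noisy pixels are replaced by a mean of noise-free neighbours weighted by inverse Euclidean distance:

```python
# Simplified salt-and-pepper filter: extreme-valued pixels are treated as
# noise (a simplification of the paper's detection rule) and replaced by a
# distance-weighted mean of noise-free 8-neighbours.
def remove_salt_and_pepper(img):
    """img: 2D list of grey levels in 0..255."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            if img[y][x] not in (0, 255):   # noise-free pixels are untouched
                continue
            num = den = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if (dy or dx) and 0 <= ny < h and 0 <= nx < w \
                            and img[ny][nx] not in (0, 255):
                        wgt = 1.0 / (dy * dy + dx * dx) ** 0.5  # inverse Euclidean distance
                        num += wgt * img[ny][nx]
                        den += wgt
            if den:
                out[y][x] = num / den
    return out
```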

As the latest video coding standard, high-efficiency video coding (HEVC) achieves better performance and supports higher resolutions than its predecessor, H.264/advanced video coding (AVC). Intra-coding is an important feature of the HEVC standard that significantly reduces spatial redundancy, owing to the flexible coding structure and the high density of angular prediction modes. However, this improvement in coding efficiency comes at the expense of extraordinary computational complexity. This study presents a novel coding unit (CU) partitioning technique for HEVC. Using a fast texture complexity detection method based on the two-dimensional Haar wavelet transform, the texture complexity of each CU is extracted. Based on the Haar wavelet coefficients obtained, an early CU splitting termination is proposed to decide whether a CU should be decomposed into four smaller CUs. Experimental results demonstrate that the fast CU partition strategy achieves a better trade-off between rate-distortion performance and complexity reduction than previous algorithms. Compared with the reference software HM16.7, the proposed algorithm can reduce the encoding time by up to 46.22% on average, with a negligible bit-rate increase of 0.45% and quality losses lower than 0.04 dB.
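The texture-complexity test can be sketched with a one-level 2D Haar transform: the summed magnitude of the detail coefficients measures complexity, and a threshold (hypothetical here; the paper's threshold selection is not given in the abstract) drives the split decision:

```python
# One-level 2D Haar detail energy of a square block as a texture-complexity
# measure, and a threshold-based split decision (threshold is hypothetical).
def haar_detail_energy(block):
    """block: 2D list with even side length."""
    n = len(block)
    energy = 0.0
    for y in range(0, n, 2):
        for x in range(0, n, 2):
            a, b = block[y][x], block[y][x + 1]
            c, d = block[y + 1][x], block[y + 1][x + 1]
            hl = (a - b + c - d) / 4.0   # horizontal detail
            lh = (a + b - c - d) / 4.0   # vertical detail
            hh = (a - b - c + d) / 4.0   # diagonal detail
            energy += abs(hl) + abs(lh) + abs(hh)
    return energy

def should_split(block, threshold):
    """Early-termination rule: split only when the block is textured."""
    return haar_detail_energy(block) > threshold
```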

One key issue in content-based image retrieval is extracting effective features to represent the visual content of an image. In this study, a new non-negative sparse feature learning approach that produces a holistic image representation from low-level local features is presented. Specifically, a modified spectral clustering method is introduced to learn a non-negative visual dictionary from the local features of training images. A non-negative sparse feature encoding method, termed non-negative locality-constrained linear coding (NNLLC), is proposed to improve on the popular locality-constrained linear coding method and obtain more meaningful and interpretable sparse codes for feature representation. Moreover, a new feature pooling strategy, named kMaxSum pooling, is proposed to alleviate the information loss of the sum or max pooling strategies; it produces a more effective holistic image representation and can be viewed as a generalisation of sum and max pooling. Retrieval experiments on two public image databases demonstrate the effectiveness of the proposed approach.
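kMaxSum pooling, i.e. summing the k largest activations per code dimension, can be sketched as follows; k = 1 recovers max pooling and k equal to the number of local codes recovers sum pooling:

```python
# kMaxSum pooling over a set of local sparse codes: for each code dimension,
# sum the k largest activations. Generalises max (k = 1) and sum (k = all).
def kmaxsum_pool(codes, k):
    """codes: list of local code vectors, all of the same dimension."""
    dims = len(codes[0])
    pooled = []
    for d in range(dims):
        vals = sorted((c[d] for c in codes), reverse=True)
        pooled.append(sum(vals[:k]))
    return pooled
```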

Compared with classic integer-order calculus, fractional calculus is a more powerful mathematical method that non-linearly preserves and enhances image features in different frequency bands. To extend the fractional-in-space diffusion scheme with matrix-valued diffusivity to superior image inpainting, the authors build a new fractional-order tensor regularisation (FTR) model that uses a newly defined fractional-order structure tensor (FST) to control the regularisation process. The proposed model is derived as a process that minimises a functional proportional to the FST, composed of the inner product of the fractional derivative vector and its transpose; hence, the new model not only inherits the genuine anisotropy of tensor regularisation, but is also better equipped to handle subtle details and complex structures, owing to the characteristics of fractional calculus. To minimise the proposed functional, the corresponding Euler–Lagrange equation is deduced, and the anisotropy of the proposed model is analysed accordingly. Fractional-order derivative masks in the positive and negative x and y directions are implemented according to the shifted Grünwald–Letnikov definition, and a proper iterative numerical scheme is analysed. Experimental results on various test images demonstrate that the proposed FTR inpainting model achieves superior inpainting performance in both noiseless and noisy scenarios.
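The Grünwald–Letnikov coefficients behind such derivative masks follow the recurrence w_0 = 1, w_k = w_{k-1}(1 - (α + 1)/k), i.e. w_k = (-1)^k C(α, k). A sketch for one direction (without the shift used in the paper's shifted definition) is:

```python
# Gruenwald-Letnikov mask coefficients and a 1-D fractional derivative.
# Integer orders recover the familiar difference stencils: alpha = 1 gives
# [1, -1, 0, ...], alpha = 2 gives [1, -2, 1, 0, ...].
def gl_coeffs(alpha, n):
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return w

def frac_derivative(signal, alpha):
    """(D^alpha f)[i] ~ sum_k w_k f[i - k], truncated at the boundary."""
    w = gl_coeffs(alpha, len(signal))
    return [sum(w[k] * signal[i - k] for k in range(i + 1))
            for i in range(len(signal))]
```

For non-integer α the coefficients decay slowly, which is why fractional masks couple distant pixels and behave differently across frequency bands.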

Copy–move forgery is one of the most elementary and prevalent forms of modification attack on digital images. In this form of forgery, one or more regions of an image are copied and pasted elsewhere in the same image, and the forged image is subsequently processed to hide the effects of the forgery. State-of-the-art copy–move forgery detection techniques for digital images are primarily aimed at finding duplicate regions in an image. The last decade has seen considerable research advancement in the area of digital image forensics, in which the investigation of possible forgeries is based solely on post-processing of images. In this study, the authors present a three-way classification of state-of-the-art digital forensic techniques, along with a complete survey of their operating principles. In addition, they analyse the schemes and evaluate and compare their performance in terms of a proposed set of parameters, which may be used as a standard benchmark for evaluating the efficiency of any copy–move forgery detection technique for digital images. The comparison results would help a user select the most suitable forgery detection technique for their requirements.

In this study, the authors introduce a new and efficient method to classify texture images. From the histogram of the Radon transform, a texture orientation matrix is obtained and combined with a texton matrix to generate a new type of co-occurrence matrix. From this co-occurrence matrix, 20 statistical features for texture image classification are extracted: seven first-order statistics and 13 second-order statistics. k-nearest neighbour and support vector machine models are used for classification. The proposed approach has been tested on widely used texture datasets (Brodatz and KTH-TIPS, the KTH Royal Institute of Technology's Textures under varying Illumination, Pose and Scale dataset) and compared with several alternative methods. The experimental results show a very high accuracy level, confirming the strength of the developed method, which outperforms state-of-the-art methods for texture classification.
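The abstract does not enumerate the seven first-order statistics. The sketch below computes seven typical choices (mean, variance, skewness, kurtosis, energy, entropy, smoothness) from a normalised grey-level histogram; treating these as the paper's feature set is an assumption:

```python
import math

# Seven commonly used first-order statistics of a normalised grey-level
# histogram p (the specific seven used in the paper are an assumption).
def first_order_stats(p):
    L = len(p)
    mean = sum(i * p[i] for i in range(L))
    var = sum((i - mean) ** 2 * p[i] for i in range(L))
    sd = var ** 0.5
    skew = (sum((i - mean) ** 3 * p[i] for i in range(L)) / sd ** 3) if sd else 0.0
    kurt = (sum((i - mean) ** 4 * p[i] for i in range(L)) / sd ** 4) if sd else 0.0
    energy = sum(q * q for q in p)
    entropy = -sum(q * math.log2(q) for q in p if q > 0)
    smoothness = 1.0 - 1.0 / (1.0 + var)
    return {"mean": mean, "variance": var, "skewness": skew, "kurtosis": kurt,
            "energy": energy, "entropy": entropy, "smoothness": smoothness}
```

Second-order statistics (e.g. Haralick-style contrast, correlation, homogeneity) are computed from the co-occurrence matrix itself rather than the histogram.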

In many image processing tasks, it is important to significantly reduce the noise level. This study introduces an efficient method for this purpose based on the generalised Cauchy (GC) distribution. To this end, some characteristics of the GC distribution are considered. In particular, the characteristic function of a GC distribution is derived using the theory of positive definite densities, by utilising the density of a GC random variable as the characteristic function of a convolution of two generalised non-symmetric Linnik variables. Further, the GC distribution is used as a filter, and in the proposed method for image noise reduction the optimal parameters of the GC filter are determined using particle swarm optimisation. The proposed method is applied to different types of noisy images, and the results are compared with four state-of-the-art denoising algorithms. Experimental results confirm that the method significantly reduces the effect of noise.
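The parameter-tuning step can be sketched with a minimal particle swarm optimiser. The one-dimensional objective in the usage below is a hypothetical stand-in, since tuning the actual GC filter requires the derived density, which the abstract does not give:

```python
import random

# Minimal particle swarm optimiser for a 1-D bounded objective, as a sketch
# of the parameter-tuning step (inertia 0.7, cognitive/social weights 1.5
# are conventional choices, not the paper's values).
def pso_minimise(f, lo, hi, n_particles=20, iters=60, seed=0):
    rnd = random.Random(seed)
    pos = [rnd.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest, pval = pos[:], [f(x) for x in pos]
    gi = min(range(n_particles), key=pval.__getitem__)
    gbest, gval = pbest[gi], pval[gi]
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rnd.random(), rnd.random()
            vel[i] = (0.7 * vel[i] + 1.5 * r1 * (pbest[i] - pos[i])
                      + 1.5 * r2 * (gbest - pos[i]))
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))
            v = f(pos[i])
            if v < pval[i]:
                pval[i], pbest[i] = v, pos[i]
                if v < gval:
                    gval, gbest = v, pos[i]
    return gbest
```

For example, minimising the stand-in objective `(x - 3)**2` over [0, 10] returns a value close to 3; in the paper the objective would instead score the GC filter's denoising quality as a function of its parameters.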

Intuitionistic fuzzy sets (IFSs) and rough sets are efficient tools for handling the uncertainty and vagueness present in images, and they have recently been combined to segment medical images in the presence of noise and intensity non-homogeneity (INU). These hybrid algorithms are sensitive to the initial centroids and parameter tuning, and they depend on the fuzzy membership function used to define the IFS. In this paper, a novel clustering algorithm, namely generalised rough intuitionistic fuzzy c-means (GRIFCM), is proposed for brain magnetic resonance (MR) image segmentation that avoids the dependency on the fuzzy membership function. In this algorithm, each pixel is categorised into one of three rough regions based on thresholds obtained from the image data in a way that minimises the influence of noise. These regions are used to create the IFS. The distance measure based on the IFS eliminates the influence of the noise and INU present in the image, producing accurate brain tissue segmentation. The proposed algorithm is evaluated through simulation and compared with the existing k-means (KM), fuzzy c-means (FCM), rough fuzzy c-means (RFCM), generalised rough fuzzy c-means (GRFCM), soft rough fuzzy c-means (SRFCM) and rough intuitionistic fuzzy c-means (RIFCM) algorithms. Experimental results prove the superiority of the proposed algorithm over the considered algorithms in all analysed scenarios.

Visible-light camera-based long-range surveillance always suffers from complex atmospheric conditions. Traditional image enhancement methods yield limited results in this setting because of their poor environmental adaptability. To address this problem, a blind image quality (IQ) learning-based multiscale Retinex, i.e. the IQ-learning multiscale Retinex, is proposed. First, a series of typical degraded images is collected. Second, several blind IQ evaluation metrics are computed for this dataset: the image brightness degree, the image region contrast degree, the image edge blur degree, the image colour quality degree, and the image noise degree. Third, a wavelet transform multiscale Retinex (WT_MSR) is used to carry out the basic image enhancement; an optimal enhancement is obtained for the degraded dataset by subjective evaluation and tuning of the multiple optimal control parameters (MOCPs) of the WT_MSR. Fourth, a back-propagation neural network (BPNN) is used to build the mapping between the IQ metrics and the MOCPs. Finally, when a new image is captured, the system computes its IQ metrics, estimates the MOCPs for the WT_MSR using the BPNN, and thereby realises an optimal enhancement. Many outdoor applications have shown the effectiveness of the proposed method.

A new contouring method for producing region boundaries with sub-pixel precision in two-dimensional (2D) scalar-valued image datasets (such as grey-scale intensity images from a digital camera or X-ray device) is introduced here. The method, fine feature sensitive marching squares (FFS-MS), extends marching squares (MS) isocontouring to produce an isocontour that preserves fine-scale features, which are often incorrectly recovered by standard MS. The extension is the 2D analogue of Kaneko and Yamamoto's volume-preserving marching cubes algorithm. It has several phases. First, it recovers an isocontour using standard MS. Then, it produces a new dataset with data values estimated by treating the recovered contour as the actual boundary. Using this new dataset, it compares the estimated data values with the data values at corresponding locations in the original dataset. Finally, the method adjusts the original dataset's pixel values at every pixel location where there is a high discrepancy between the original and estimated data values. These phases are repeated iteratively until an optimality criterion is satisfied. Experimental analyses of FFS-MS, focusing on fine-feature recovery in comparison with standard MS, are also presented.