CiteScore: 3.61
CiteScore measures the average citations received per document published in this title. CiteScore values are based on citation counts in a given year (e.g. 2015) to documents published in the three previous calendar years (e.g. 2012–14), divided by the number of documents in those three years.
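
The definition above is a simple ratio, which can be made concrete with a small sketch (the counts below are hypothetical, not this journal's actual figures):

```python
def citescore(citations_in_year: int, docs_prev_three_years: int) -> float:
    """CiteScore: citations received in a given year to documents published
    in the three previous calendar years, divided by the number of those documents."""
    return citations_in_year / docs_prev_three_years

# hypothetical example: 722 citations in 2015 to 200 documents published 2012-14
print(citescore(722, 200))  # 3.61
```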

Impact Factor: 3.317 (2016)
The Impact Factor measures the average number of citations received in a particular year by papers published in the journal during the two preceding years.
2017 Journal Citation Reports (Clarivate Analytics, 2018)

5-Year Impact Factor: 3.211 (2016)
To calculate the five-year Impact Factor, citations are counted in 2016 to the previous five years and divided by the source items published in the previous five years.
2017 Journal Citation Reports (Clarivate Analytics, 2018)

Source Normalized Impact per Paper (SNIP): 1.589 (2016)
SNIP measures contextual citation impact by weighting citations based on the total number of citations in a subject field.

SCImago Journal Rank (SJR): 0.968 (2016)
SJR is a prestige metric based on the idea that not all citations are the same. SJR uses an algorithm similar to Google PageRank; it provides both a quantitative and a qualitative measure of the journal's impact.

Call for Papers

In medical image analysis, accurate diagnosis of a disease depends on two aspects: medical image acquisition and medical image interpretation. Medical image acquisition has advanced substantially over recent years, with devices acquiring data at faster rates and higher resolution. Medical image interpretation, by contrast, has only recently begun to benefit from computer technology, and most interpretation of medical images is still performed by physicians. However, image interpretation by humans is limited by its subjectivity.

Recent years have seen a growing number of publications on neural networks (NNs), owing to their extensive applications in a broad range of areas such as repetitive learning, pattern classification, nonlinear control, adaptive control, and image processing. In real-world engineering, complex dynamics arising from multiplicative noise, missing data, and communication delays are commonly unavoidable in applications of NNs. These complex dynamics have a major impact on dynamical behaviour and on the precision of state estimation, and can be regarded as a crucial source of negative effects such as periodic oscillation, divergence, and even chaos. As such, much research effort has been devoted to dynamic performance analysis, and a variety of efficient approaches have been proposed in the literature.
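
As a minimal sketch of why multiplicative noise can destabilize an otherwise stable system (a hypothetical scalar example, not any specific method from the literature): for x_{k+1} = (a + σ·w_k)·x_k with zero-mean, unit-variance white noise w_k, the second moment obeys the exact recursion m_{k+1} = (a² + σ²)·m_k, so sufficiently strong noise alone drives mean-square divergence.

```python
def second_moment(a: float, sigma: float, m0: float = 1.0, steps: int = 50) -> float:
    """Second moment E[x_k^2] of x_{k+1} = (a + sigma*w_k) * x_k, where w_k is
    zero-mean, unit-variance white noise; the exact moment recursion is
    m_{k+1} = (a^2 + sigma^2) * m_k."""
    m = m0
    for _ in range(steps):
        m = (a ** 2 + sigma ** 2) * m
    return m

# The noise-free system (|a| = 0.9 < 1) is stable, but strong multiplicative
# noise makes the second moment diverge: 0.9^2 + 0.6^2 = 1.17 > 1.
print(second_moment(0.9, 0.2) < 1.0)  # weak noise: moment decays -> True
print(second_moment(0.9, 0.6) > 1.0)  # strong noise: moment grows -> True
```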

The goal of image super-resolution (SR) is to restore a visually pleasing high-resolution (HR) image from a low-resolution (LR) input image or video sequence. HR images have higher pixel densities and finer details than LR images. Image SR has proved to be of great significance in many applications, such as video surveillance, ultra-high-definition TV, low-resolution face recognition, and remote sensing imaging. Benefiting from these broad application prospects, SR has attracted huge interest and is currently one of the most active research topics in image processing and computer vision.
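
As a hedged baseline sketch (not any particular SR method), nearest-neighbor upscaling below only replicates pixels; it raises pixel density but adds no new detail, which is precisely the gap learning-based SR aims to close.

```python
import numpy as np

def upscale_nearest(lr: np.ndarray, factor: int) -> np.ndarray:
    """Nearest-neighbor upscaling: replicate each LR pixel factor x factor times.
    Pixel density increases, but no fine detail is restored."""
    return np.repeat(np.repeat(lr, factor, axis=0), factor, axis=1)

lr = np.array([[10, 20],
               [30, 40]])
hr = upscale_nearest(lr, 2)
print(hr.shape)  # (4, 4)
```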

Recently, deep learning has become one of the core technologies of computer vision and artificial intelligence. Deep learning is a data-driven technology whose performance relies heavily on large-scale labeled data, e.g., ImageNet and MS COCO. Unfortunately, collecting and annotating large-scale image data from the real world is expensive, the collected real images offer limited coverage of complex environmental conditions, and real scenes are uncontrollable and unrepeatable.

During the past decade, large-scale multimedia data (e.g., video, images, audio, text) have become easy to collect in many fields, and pattern discovery from these raw data has attracted increasing interest in the multimedia domain. Semantically understanding multimedia data can substantially enhance its practical applications.

Neural networks (NNs) and deep learning (DL) currently provide the best solutions to many problems in image recognition, speech recognition, natural language processing, control, and precision health. NNs and DL bring artificial intelligence (AI) much closer to human modes of thinking.

Artificial intelligence (AI) is a comprehensive area of study comprising numerous subjects, including intelligent search, machine learning, knowledge management, pattern recognition, uncertainty management, and neural networks. With the development of big data and deep learning, AI has become a subject of broad and current interest; recent key breakthroughs in information technology, especially in computational capability, are often related to AI and have in turn become a key factor in advancing it. Traditional AI technologies face challenges in processing massive data, in large-scale communication and collaboration, and in the collaborative computing of various algorithms. To meet these challenges, parallel computing has been introduced.
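
As a minimal sketch of the parallel-computing idea, using Python's standard concurrent.futures rather than any specific AI framework: independent work items are distributed across a pool of workers instead of being processed one at a time.

```python
from concurrent.futures import ThreadPoolExecutor

def square(x: int) -> int:
    return x * x

# Distribute independent work items across a pool of workers; the pool
# schedules them concurrently and gathers results in input order.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(square, range(10)))

print(results == [x * x for x in range(10)])  # True
```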

Much real-life data naturally exhibits structure or connectivity. Typical examples include world-wide-web data, biological data, social network data, and image data. Graphs provide a natural way to represent and analyze the structure in these types of data, but the related algorithms usually suffer from high computational and/or storage complexity, and some of the underlying problems are even NP-complete.
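
As a minimal sketch of the graph representation the paragraph describes — an adjacency list plus breadth-first search, one of the graph problems that is still solvable in linear time, unlike the NP-complete cases mentioned:

```python
from collections import deque

def bfs_distances(adj: dict, source) -> dict:
    """Breadth-first search over an adjacency-list graph; returns hop counts
    from the source. Runs in O(V + E) time -- many richer graph problems are
    far harder, some NP-complete."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(bfs_distances(graph, "a"))  # {'a': 0, 'b': 1, 'c': 1, 'd': 2}
```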

Living in the era of big data, we have been witnessing the dramatic growth of heterogeneous data, consisting of a complex mix of cross-media content such as text, images, videos, audio, graphics, and time series. Such hybrid data come from multiple sources and hence inhabit different feature spaces. This situation creates new challenges in designing effective algorithms and in developing generalized frameworks that meet heterogeneous computing requirements. Meanwhile, deep learning, with its data-driven representation learning, is revolutionizing diverse key application areas such as speech recognition, object detection, image classification, and machine translation.

Deep neural networks have become a crucial technology in the multimedia community. They have been exploited in a series of multimedia tasks, such as multimedia content analysis and understanding, retrieval, compression, and transmission. For example, the Deep Boltzmann Machine (DBM) and the Deep Auto-Encoder (DAE) have been widely used for multimodal learning and cross-modal retrieval. Convolutional Neural Networks (CNNs) and their variants have become the basic tools for building deep representations that perceive and understand multimodal information such as images and audio. Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM) networks can be used for sequence modeling and prediction on high-level semantic data such as natural language.
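
As a hedged NumPy sketch of the basic CNN building block mentioned above — a single "valid" 2-D convolution (strictly a cross-correlation, as in most deep learning frameworks); stacks of such filters plus nonlinearities form convolutional layers:

```python
import numpy as np

def conv2d_valid(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Slide the kernel over the image (no padding) and sum the elementwise
    products at each position -- the elementary operation of a CNN layer."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

img = np.arange(16, dtype=float).reshape(4, 4)
edge = conv2d_valid(img, np.array([[1.0, -1.0]]))  # horizontal gradient filter
print(edge.shape)  # (4, 3)
```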

As the famous slogan "Connecting People" suggests, many developments in novel technologies intensify the relationships between people without necessarily enhancing technologies that are close to the nature of human beings. Examples are easily found in recent computing paradigms, such as cloud computing, which advances network infrastructure for data storage and resource sharing, or the Internet of Things, which investigates the intelligence and awareness of objects involved in the network.