Class incremental learning is a special multi-class classification task in
which the number of classes is not fixed but grows with the continual arrival
of new data. Existing research has mainly focused on solving the catastrophic
forgetting problem in class incremental learning. However, these models still
require the old classes to be cached in auxiliary data structures or models, which is
inefficient in space or time. In this paper, we are the first to discuss the difficulty
of class incremental learning without support from old classes, which we call the
softmax suppression problem. To address these challenges, we develop a new
model named Label Mapping with Response Consolidation (LMRC), which no longer
needs access to the old classes. We propose the Label Mapping algorithm,
combined with a multi-head neural network, to mitigate the softmax suppression
problem, and the Response Consolidation method to overcome the
catastrophic forgetting problem. Experimental results on benchmark datasets
show that our proposed method achieves much better performance than
related methods in different scenarios.
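The abstract does not specify the Label Mapping algorithm, but the multi-head idea it mentions can be sketched: give each incremental task its own softmax head, so new-class logits never enter the same softmax as old-class logits and cannot suppress them. The following is a minimal illustrative sketch, not the paper's method; all class and function names are invented for this example.

```python
import numpy as np

def softmax(z):
    z = z - z.max()           # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

class MultiHeadClassifier:
    """One linear softmax head per incremental task, over a shared feature space.

    Because each task keeps a separate head, adding a new task leaves the
    old heads' softmax distributions untouched.
    """
    def __init__(self, feature_dim):
        self.feature_dim = feature_dim
        self.heads = []           # list of (weight matrix, class labels)

    def add_task(self, class_labels, rng):
        W = rng.normal(scale=0.01, size=(len(class_labels), self.feature_dim))
        self.heads.append((W, list(class_labels)))

    def predict(self, features):
        # Score every head, then return the globally most confident class.
        best_label, best_prob = None, -1.0
        for W, labels in self.heads:
            probs = softmax(W @ features)
            i = int(np.argmax(probs))
            if probs[i] > best_prob:
                best_label, best_prob = labels[i], float(probs[i])
        return best_label

rng = np.random.default_rng(0)
clf = MultiHeadClassifier(feature_dim=8)
clf.add_task(["cat", "dog"], rng)
clf.add_task(["car", "truck"], rng)   # adding a task leaves old heads intact
pred = clf.predict(rng.normal(size=8))
```

Comparing confidences across heads at test time is one simple way to pick a class without a shared softmax; the paper's actual inference rule may differ.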

Autonomous driving is a heavily explored area today, with computer vision as the main component of vehicle perception. The quality of a vision system based on neural networks depends on the dataset it was trained on, and it is extremely difficult to find traffic sign datasets for most countries of the world. This means an autonomous vehicle trained in the USA will not be able to drive through Lithuania recognizing all road signs along the way. In this paper, we propose a way to update a model using a small dataset from the country in which the vehicle will be used. It is important to note that this is not a panacea, but rather a small upgrade that can boost autonomous car development in countries with limited data access. We achieved an improvement of about 10 percent in quality and expect even better results in future experiments.

CODEQ is a new population-based meta-heuristic algorithm that hybridizes concepts from chaotic search, opposition-based learning, differential evolution, and quantum mechanics. CODEQ has been used successfully to solve different types of problems (e.g., constrained, integer-programming, and engineering problems) with excellent results. In this paper, CODEQ is used to train feed-forward neural networks. The proposed method is compared with particle swarm optimization and differential evolution on three data sets, with encouraging results.
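CODEQ's update rules are not given in the abstract, but the general recipe it shares with the baselines it is compared against can be sketched: treat the network's flattened weight vector as a candidate solution and the training loss as the fitness, then let a population-based optimizer search. The sketch below uses a plain differential-evolution loop (DE/rand/1/bin) as a stand-in, not CODEQ itself; all names and hyperparameters are illustrative.

```python
import numpy as np

def forward(w, X, hidden=4):
    # Decode a flat weight vector into a one-hidden-layer network.
    d = X.shape[1]
    W1 = w[:d * hidden].reshape(d, hidden)
    W2 = w[d * hidden:d * hidden + hidden]
    b = w[-1]
    return np.tanh(X @ W1) @ W2 + b

def mse(w, X, y):
    return float(np.mean((forward(w, X) - y) ** 2))

def de_train(X, y, dim, pop_size=20, gens=100, F=0.5, CR=0.9, seed=0):
    """Differential evolution over flat weight vectors; fitness = training MSE."""
    rng = np.random.default_rng(seed)
    pop = rng.normal(scale=0.5, size=(pop_size, dim))
    fit = np.array([mse(w, X, y) for w in pop])
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = a + F * (b - c)
            # Binomial crossover between mutant and current member.
            trial = np.where(rng.random(dim) < CR, mutant, pop[i])
            f = mse(trial, X, y)
            if f < fit[i]:            # greedy one-to-one selection
                pop[i], fit[i] = trial, f
    best = int(np.argmin(fit))
    return pop[best], fit[best]

# Toy regression target: y = sin(2 * x0).
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(64, 2))
y = np.sin(2 * X[:, 0])
dim = 2 * 4 + 4 + 1               # W1 + W2 + bias for hidden=4
w_best, loss = de_train(X, y, dim)
```

The appeal of this family of methods for network training is that it needs only loss evaluations, no gradients, which is why chaotic, opposition-based, and quantum-inspired variants such as CODEQ slot into the same loop.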

Sentiment analysis of online user-generated content is important for many social media analytics tasks. Researchers have largely relied on textual sentiment analysis to develop systems that predict political elections, measure economic indicators, and so on. Recently, social media users have increasingly been using images and videos to express their opinions and share their experiences. Sentiment analysis of such large-scale visual content can help better extract user sentiment toward events or topics, such as those in image tweets, so that prediction of sentiment from visual content complements textual sentiment analysis. Motivated by the need to leverage large-scale yet noisy training data to solve the extremely challenging problem of image sentiment analysis, we employ Convolutional Neural Networks (CNNs). We first design a suitable CNN architecture for image sentiment analysis. We obtain half a million training samples by using a baseline sentiment algorithm to label Flickr images. To make use of such noisy machine-labeled data, we employ a progressive strategy to fine-tune the deep network. Furthermore, we improve performance on Twitter images by inducing domain transfer with a small number of manually labeled Twitter images. We have conducted extensive experiments on manually labeled Twitter images, and the results show that the proposed CNN achieves better performance in image sentiment analysis than competing algorithms.
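The abstract describes the progressive strategy only at a high level. One common reading of such a strategy is: train on all machine-labeled data, then retrain only on the samples whose noisy label the current model itself predicts with high confidence, repeating for a few rounds. The sketch below illustrates that generic idea with a tiny logistic-regression stand-in for the CNN; it is an assumption about the strategy's shape, not the paper's procedure, and every name and threshold is illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logreg(X, y, lr=0.5, steps=300):
    # Minimal gradient-descent logistic regression (stand-in for the CNN).
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w)
        w -= lr * X.T @ (p - y) / len(y)
    return w

def progressive_finetune(X, noisy_y, rounds=3, keep_conf=0.7):
    """Train, then retrain on samples whose noisy label the model itself
    assigns high probability; repeat for a few rounds."""
    w = fit_logreg(X, noisy_y)
    mask = np.ones(len(noisy_y), dtype=bool)
    for _ in range(rounds):
        p = sigmoid(X @ w)
        # Model's confidence in each sample's (possibly wrong) label.
        conf = np.where(noisy_y == 1, p, 1 - p)
        mask = conf > keep_conf
        if mask.sum() < 10:           # avoid collapsing to nothing
            break
        w = fit_logreg(X[mask], noisy_y[mask])
    return w, mask

# Synthetic data: true label is sign of the first feature, 20% labels flipped.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))
true_y = (X[:, 0] > 0).astype(float)
flip = rng.random(400) < 0.2
noisy_y = np.where(flip, 1 - true_y, true_y)
w, kept = progressive_finetune(X, noisy_y)
acc = float(np.mean((sigmoid(X @ w) > 0.5) == true_y))
```

The filtering step is what makes the strategy "progressive": each round the training set is re-selected by a model that has already absorbed the easy, mostly-correct examples, so mislabeled samples are gradually excluded.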

Personalized health monitoring is slowly becoming a reality thanks to advances in small, high-fidelity sensors, low-power processors, and energy harvesting techniques. The ability to process these data efficiently and effectively and to extract useful information is of the utmost importance. In this paper, we address this challenge for the application of automated seizure detection. We explore a variety of representations and machine learning algorithms for the task of seizure detection in high-resolution, multi-channel EEG data. In doing so, we examine classification accuracy, computational complexity, and memory requirements, with a view toward understanding which approaches are most suitable. In particular, we show that layered learning approaches such as Deep Belief Networks excel along these dimensions.