ABSTRACT

A novel Position-Invariant Robust Feature (PIRF) is presented to address the problem of highly dynamic scene recognition. A PIRF is obtained by identifying existing local features (e.g., SIFT) that have wide-baseline visibility within a place (one place comprises multiple sequential images). These wide-baseline visible features are then represented as a single PIRF, computed as the average of all descriptors associated with it. PIRFs are particularly robust against highly dynamic scene changes: a single PIRF can be matched correctly against many features from many dynamic images. This paper also describes an approach to using these features for scene recognition. Recognition proceeds by matching individual PIRFs to a set of features from test images, with subsequent majority voting to identify the place with the most matched PIRFs. The PIRF system is trained and tested on 2000+ outdoor omnidirectional images and on the COLD datasets. Despite its simplicity, PIRF offers a markedly better recognition rate for dynamic outdoor scenes (ca. 90%) than other features. Additionally, a robot navigation system based on PIRF (PIRF-Nav) outperforms other incremental topological mapping methods in terms of time (70% less) and memory. The number of PIRFs can be reduced further to shorten computation time while retaining high accuracy, which makes the method suitable for long-term recognition and localization.
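The extraction step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes descriptors are given per frame, a caller-supplied nearest-neighbour `matcher`, and a fixed sliding `window` of sequential images per place; features that survive matching across the whole window are averaged into one PIRF.

```python
import numpy as np

def extract_pirfs(frame_descriptors, matcher, window=3):
    """Sketch of PIRF extraction: keep only local features (e.g. SIFT
    descriptors) that can be matched across `window` consecutive images
    of a place, and compress each such track into a single averaged
    descriptor (the PIRF)."""
    pirfs = []
    n = len(frame_descriptors)
    for start in range(n - window + 1):
        for desc in frame_descriptors[start]:
            track = [desc]
            # try to follow this feature through the next frames
            for frame in frame_descriptors[start + 1:start + window]:
                match = matcher(track[-1], frame)  # matched descriptor or None
                if match is None:
                    break
                track.append(match)
            if len(track) == window:               # wide-baseline visible
                pirfs.append(np.mean(track, axis=0))
    return pirfs
```

A feature seen in only one frame never enters the output, which is what filters out descriptors caused by transient, dynamic objects.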

ABSTRACT

This paper describes a new visual feature that specifically addresses the problem of highly dynamic place recognition. The feature is obtained by identifying existing local features, such as SIFT or SURF, that have wide-baseline visibility within a place. These identified local features are then compressed into a single representative feature, a wide-baseline visible feature, computed as the average of all the features associated with it. The proposed feature is especially robust against highly dynamic scene changes; it can be matched correctly against a number of features collected from many dynamic images. This paper also describes an approach to using these features for scene recognition. Recognition proceeds by matching each individual feature to a set of features from test images, followed by majority voting to identify the place with the most matched features. The proposed feature is trained and tested on 2000+ outdoor omnidirectional images. Despite its simplicity, the wide-baseline visible feature offers a recognition rate twice as good (ca. 93%) as that of other features. The number of features can be further reduced to shorten computation time without a drop in accuracy, which makes the feature more suitable for long-term scene recognition and localization.
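The matching-plus-majority-voting recognition described above can be sketched as follows. All names are illustrative assumptions: each place is represented by its set of averaged features, each feature that finds a match within a distance `threshold` in the test image casts one vote, and the place with the most votes wins.

```python
import numpy as np

def recognize_place(test_descriptors, place_features, threshold=0.5):
    """Sketch of voting-based recognition: count, per place, how many of
    its stored features match some descriptor of the test image, then
    return the place with the highest vote count."""
    votes = {}
    for place, feats in place_features.items():
        count = 0
        for f in feats:
            dists = [np.linalg.norm(f - d) for d in test_descriptors]
            if min(dists) < threshold:
                count += 1          # this feature votes for its place
        votes[place] = count
    return max(votes, key=votes.get)
```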

ABSTRACT

In this paper we present a novel method for online and incremental appearance-based localization and mapping in a highly dynamic environment. Using position-invariant robust features (PIRFs), the method can achieve a high recall rate at 100% precision. It can handle both strong perceptual aliasing and dynamic changes of places efficiently. Its performance also extends beyond conventional images; it is applicable to omnidirectional images, for which major portions of the scenes are similar across most places. The proposed PIRF-based navigation method, named PIRF-Nav, is evaluated by testing it on two standard datasets in a similar manner as FAB-MAP and on an additional omnidirectional image dataset that we collected. This extra dataset was collected on two days during different specific events (e.g., an open-campus event) to present challenges related to illumination variance and strong dynamic changes, and to assess recognition under dynamic scene changes. Results show that PIRF-Nav outperforms FAB-MAP; at precision 1, PIRF-Nav yields a recall rate about twice as high (approximately an 80% increase) as that of FAB-MAP. Its computation time is sufficiently short for real-time applications. The method is fully incremental and requires no offline process for dictionary creation. Additional testing using combined datasets shows that PIRF-Nav can function over the long term and can solve the kidnapped robot problem.

ABSTRACT

This paper presents a fast and online incremental solution to the appearance-based loop-closure detection problem in a dynamic indoor environment. Closing the loop in a dynamic environment has been an important topic in robotics for decades. Recently, PIRF-Nav has been reported to achieve a high recall rate at precision 1. However, PIRF-Nav has three main disadvantages: (i) its computational expense puts it beyond real-time, (ii) it consumes a large amount of memory by redundantly keeping signatures of places, and (iii) it is ill-suited to indoor environments. These factors hinder the use of PIRF-Nav in general environments for long-term, high-speed mobile robotic applications. Therefore, this paper proposes two techniques: (i) a modified PIRF extraction that makes the system more suitable for indoor environments, and (ii) a new dictionary-management scheme that eliminates redundant searching and conserves memory. The results show that our proposed method can complete tasks up to 12 times faster than PIRF-Nav with only a slight decline in recall. In addition, we collected extra data from a university canteen crowded at lunchtime. Even in this crowded indoor environment, our proposed method shows better real-time processing performance than other methods.

ABSTRACT

This paper presents a novel use of feature sharing in an appearance-based simultaneous localization and mapping (SLAM) system for robots. Feature sharing was inspired by man-made settings such as offices and houses, in which many similar objects appear repeatedly in the same environment. With this concept, we can expect better performance with lower memory consumption. Combining this concept with Position-Invariant Robust Features (PIRFs), we can improve both accuracy and processing time. Our system is fully online and incremental. Our experiments were conducted on two well-known datasets, the City Centre dataset and the Lip6Indoor dataset. Moreover, we tested our system on a crowded university canteen at lunchtime as a more dynamic environment. The results showed that our system achieves outstanding accuracy and shorter processing time compared with FAB-MAP and the fast and incremental bag-of-words method, which are considered the state-of-the-art offline and online appearance-based SLAM systems, respectively.
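The feature-sharing idea above can be sketched with a small dictionary structure. This is a hypothetical illustration, not the paper's implementation: a new descriptor close enough to an existing entry is merged into it (running mean) and merely adds its place to an inverted index, so repeated objects are stored once and shared across places.

```python
import numpy as np

class SharedFeatureDictionary:
    """Minimal sketch of feature sharing: recurring descriptors are
    stored once; an inverted index records which places share each one."""
    def __init__(self, threshold=0.5):
        self.words = []      # one averaged descriptor per shared feature
        self.places = []     # inverted index: word index -> set of places
        self.counts = []     # raw features absorbed by each word
        self.threshold = threshold

    def add(self, descriptor, place_id):
        descriptor = np.asarray(descriptor, dtype=float)
        for i, w in enumerate(self.words):
            if np.linalg.norm(w - descriptor) < self.threshold:
                # share the existing entry: update running mean, add place
                self.words[i] = (w * self.counts[i] + descriptor) / (self.counts[i] + 1)
                self.counts[i] += 1
                self.places[i].add(place_id)
                return i
        self.words.append(descriptor)    # genuinely new feature
        self.places.append({place_id})
        self.counts.append(1)
        return len(self.words) - 1
```

Memory then grows with the number of distinct features rather than the number of observations, which is the expected saving in repetitive man-made environments.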

ABSTRACT

This paper describes a new data-partitioning technique for use with a visual SLAM system. Combined with an existing SLAM system, the technique surveys the areas to which the input image might belong and then retrieves matched images from those areas. The proposed technique can run in parallel with a standard SLAM system, such as FAB-MAP, in an unsupervised and incremental manner. We also introduce the use of Position-Invariant Robust Features (PIRFs) to make the system robust to dynamic changes in scenes, such as moving objects. Combining our technique with standard SLAM can markedly increase the localization recall rate. Experimental results showed that the recall rate of FAB-MAP can be increased to 30% at the same precision.
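The coarse-to-fine retrieval described above can be sketched as follows. All names are illustrative assumptions: each partition (area) is summarized by a centroid descriptor; a query is first scored against the few centroids, and only images from the best-scoring areas are passed on to the SLAM system's detailed matcher.

```python
import numpy as np

def partition_search(query_desc, area_centroids, area_images, top_k=2):
    """Sketch of data partitioning: rank coarse areas by centroid
    distance to the query, then return only the candidate images from
    the top_k areas for detailed matching."""
    scores = [np.linalg.norm(query_desc - c) for c in area_centroids]
    best_areas = np.argsort(scores)[:top_k]
    candidates = []
    for a in best_areas:
        candidates.extend(area_images[a])   # images stored per area
    return candidates
```

Because only a few areas are searched in detail, the per-query cost stays roughly constant as the map grows, which is what lets the technique run alongside a normal SLAM loop.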

ABSTRACT

A vision-based mobile robot’s simultaneous localization and mapping (SLAM) and navigation have been the source of countless research contributions because of the rich sensory output and cost effectiveness of vision sensors. However, existing methods of vision-based SLAM and navigation are not effective for robots used in crowded environments such as train stations and shopping malls: when feature points are extracted from an image of a crowded scene, many of them come not only from static objects but also from dynamic objects such as humans. If all such feature points are treated as landmarks, the algorithm collapses and errors occur in map building and self-localization. In this paper, we propose a SLAM and navigation method that is effective even in crowded environments by extracting robust 3D feature points from sequential vision images and odometry. Using the proposed method, we can eliminate unstable feature points extracted from dynamic objects and perform SLAM and navigation stably. We present experiments showing the utility of our approach in crowded environments, including map building and navigation.
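The stability test implied above (checking sequential observations against odometry) can be sketched as follows. This is a simplified, hypothetical 2D model, not the paper's method: a tracked point is kept only if its apparent motion between frames is fully explained by the robot's own odometry, i.e. the point itself is static; points on moving people fail this check.

```python
import numpy as np

def filter_stable_points(tracks, odometry_delta, tol=0.2):
    """Sketch of odometry-consistent filtering: each track holds a
    point's position in the robot frame at time t and t+1; keep only
    points whose motion is explained by the robot's own movement."""
    dx, dy, dtheta = odometry_delta            # robot motion between frames
    c, s = np.cos(dtheta), np.sin(dtheta)
    R = np.array([[c, -s], [s, c]])            # robot rotation over the step
    stable = []
    for prev_xy, curr_xy in tracks:
        # where a static point should appear after the robot moves
        predicted = R.T @ (np.asarray(prev_xy) - np.array([dx, dy]))
        if np.linalg.norm(predicted - np.asarray(curr_xy)) < tol:
            stable.append(curr_xy)             # consistent -> static landmark
    return stable
```

Only the surviving points are used as landmarks, which is what keeps map building and self-localization stable in a crowd.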

ABSTRACT

Existing SLAM methods are not sufficient for robots that must operate in crowded environments such as stations and shopping malls. In this paper, we propose a SLAM and navigation method that is robust in such crowded environments.