Appearance-based mapping algorithms recognize places to localize a mobile robot during indoor and outdoor operation. Vision sensors are well suited to this task because they provide ample features with which to uniquely represent a scene. The FAB-MAP approach uses hand-engineered features (e.g. SIFT, SURF) to assist robot localization by detecting loop closures. However, these features are sensitive to illumination changes; consequently, FAB-MAP produces false loop closures when a place has undergone an illumination or appearance change, which often leads to a failure of appearance-based mapping. With the advancement of artificial intelligence, Convolutional Neural Networks (CNNs) for place recognition have attracted considerable attention in computer vision owing to their robust performance against such changes. This master thesis aims to integrate features obtained from a CNN into the FAB-MAP approach to improve loop-closure detection. The performance of the proposed method will be evaluated by comparing its precision–recall curve and average precision with those of the original FAB-MAP approach.
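The evaluation criterion mentioned above can be sketched in a few lines of code. The following is a minimal illustration of how a precision–recall curve and average precision are computed for ranked loop-closure candidates; the match scores and ground-truth labels are hypothetical examples, not data from this thesis.

```python
def precision_recall_curve(scores, labels):
    """Sort candidate loop closures by score and sweep a threshold,
    recording precision and recall at each cut-off."""
    pairs = sorted(zip(scores, labels), key=lambda p: -p[0])
    total_pos = sum(labels)
    tp = fp = 0
    precisions, recalls = [], []
    for _score, is_true_closure in pairs:
        if is_true_closure:
            tp += 1
        else:
            fp += 1
        precisions.append(tp / (tp + fp))
        recalls.append(tp / total_pos)
    return precisions, recalls

def average_precision(precisions, recalls):
    """AP = sum of precision weighted by the increase in recall."""
    ap, prev_recall = 0.0, 0.0
    for p, r in zip(precisions, recalls):
        ap += p * (r - prev_recall)
        prev_recall = r
    return ap

# Hypothetical match confidences (e.g. loop-closure probabilities)
# and ground truth (1 = true loop closure, 0 = false match).
scores = [0.95, 0.90, 0.80, 0.60, 0.40, 0.20]
labels = [1, 1, 0, 1, 0, 0]
prec, rec = precision_recall_curve(scores, labels)
print(average_precision(prec, rec))
```

A higher average precision indicates that true loop closures are ranked above false matches across the whole operating range, which is why it is a common single-number summary alongside the full precision–recall curve.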