Recently, many vision-based navigation methods have been introduced as intelligent robot applications. However, most of these methods focus on retrieving the database image that corresponds to a query image. Consequently, when the environment changes, for example, when objects move within it, a robot is unlikely to find consistent corresponding points between the query and any database image. To address this problem, we propose a novel motion-based navigation method, in contrast to appearance-based approaches. The algorithm relies on camera-based motion estimation to plan the robot's next movement and on robust feature matching to recognize the home and destination locations. Experimental results demonstrate the robustness of the proposed vision-based autonomous navigation against environmental changes.
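To illustrate what "robust feature matching" can mean in practice, the sketch below implements Lowe's ratio test, a common way to reject ambiguous descriptor matches. This is an assumed, illustrative choice; the abstract does not specify which matching scheme the method actually uses, and the toy 2-D "descriptors" stand in for real image features.

```python
import math

def euclidean(a, b):
    # Euclidean distance between two descriptor vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def ratio_test_matches(query_desc, db_desc, ratio=0.75):
    """Lowe's ratio test (illustrative): accept a match only when the
    nearest database descriptor is clearly closer than the second-nearest,
    which discards features that look alike and often cause mismatches
    when the scene changes."""
    matches = []
    for qi, q in enumerate(query_desc):
        dists = sorted((euclidean(q, d), di) for di, d in enumerate(db_desc))
        (d1, best), (d2, _) = dists[0], dists[1]
        if d2 > 0 and d1 / d2 < ratio:
            matches.append((qi, best))  # (query index, database index)
    return matches

# Toy example: the first query descriptor has one clear nearest neighbor,
# the second is equidistant from two database descriptors and is rejected.
query = [(1.0, 1.0), (5.0, 5.0)]
db = [(10.0, 10.0), (1.1, 0.9), (5.1, 5.0), (5.0, 5.1)]
print(ratio_test_matches(query, db))  # → [(0, 1)]
```

Filtering out ambiguous matches this way is one reason feature-based place recognition can stay reliable when parts of the scene move or change.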