[ENGLISH] Floor segmentation is a challenging problem in image processing with a wide range of applications in engineering. In mobile robot navigation systems, detecting which pixels belong to the floor is crucial for guiding the robot within an environment, defining the geometry of the scene, and avoiding obstacles. This report presents a floor segmentation algorithm for indoor scenarios that works with single grey-scale images. The portion of the floor closest to the camera is segmented by judiciously joining a set of previously detected horizontal and vertical lines. Unlike similar methods in the literature, it does not rely on computing the vanishing point; it therefore adapts faster to changes in camera motion and is not restricted to typical corridor scenes. A second contribution of this thesis project is the detection of moving features among the points within the segmented floor area. Based on the camera ego-motion, the expected motion of points on the ground plane is computed and used to reject feature points that belong to movable obstacles. A key strength of the designed method is its ability to deal with general camera motion. The implemented techniques are to be integrated into a visual-aided inertial navigation system (INS) that combines visual and inertial information. This INS requires a certain number of feature-point correspondences on the ground plane to correct the data from an inertial measurement unit (IMU) and estimate the ego-motion of the camera. Hence, segmenting the floor region and detecting movable features are relevant tasks for ensuring that the considered features do belong to the ground.