Abstract: In this paper we present a system for vision-based planning and execution of fingertip grasps using a four-fingered dextrous hand. Our system does not rely on prior models of the objects to be grasped; it obtains all the information it needs from vision and from tactile sensors located at the fingertips of the hand. The grasp planner is based on a genetic algorithm modified to allow the use of real numbers as the basic representation unit. The grasp executor is based on differential visual feedback, which allows the system to specify goals and monitor progress in image space without needing absolute calibration between the camera and the hand. We present experimental results showing the application of the system to grasping unknown objects with the Utah/MIT hand.
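The abstract mentions a genetic algorithm adapted to use real numbers as the basic representation unit. The sketch below is a minimal, generic real-coded GA (tournament selection, blend crossover, Gaussian mutation) applied to a toy objective; it is an illustration of the general technique only, not the authors' grasp planner, and all function names and parameter values here are our own assumptions.

```python
import random

def real_coded_ga(fitness, dim, bounds, pop_size=30, generations=100,
                  mutation_rate=0.1, mutation_scale=0.1, seed=0):
    """Minimize `fitness` over real-valued vectors with a simple
    real-coded GA: genes are floats, not bit strings (illustrative
    sketch, not the paper's planner)."""
    rng = random.Random(seed)
    lo, hi = bounds

    def clip(x):
        return min(hi, max(lo, x))

    pop = [[rng.uniform(lo, hi) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness)
        children = list(scored[:2])  # elitism: keep the two best
        while len(children) < pop_size:
            # Tournament selection: best of two random picks.
            p1 = min(rng.sample(scored, 2), key=fitness)
            p2 = min(rng.sample(scored, 2), key=fitness)
            # Blend crossover: each child gene is a random convex
            # combination of the parents' genes.
            child = [a + rng.random() * (b - a) for a, b in zip(p1, p2)]
            # Gaussian mutation on each gene with small probability.
            child = [clip(g + rng.gauss(0, mutation_scale))
                     if rng.random() < mutation_rate else g
                     for g in child]
            children.append(child)
        pop = children
    return min(pop, key=fitness)

# Toy objective: squared distance from a hypothetical target point.
target = [1.0, -2.0, 0.5]
sphere = lambda x: sum((a - b) ** 2 for a, b in zip(x, target))
best = real_coded_ga(sphere, dim=3, bounds=(-5.0, 5.0))
```

Using floats as genes avoids the discretization imposed by binary encodings, which is useful when the search space consists of continuous quantities such as fingertip positions on an object surface.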