Since a robot with ears may be deployed in a variety of auditory environments, a robot audition system should offer an easy way to adapt to each of them. HARK addresses this with a set of modules built on the open-source middleware FlowDesigner, which also reduces the overhead of data transfer between modules.

HARK has been open-sourced since April 2008. Implementations of HARK combining MUSIC-based sound source localization, GHDSS-based sound source separation, and Missing-Feature-Theory-based automatic speech recognition (ASR) on several robots, such as HRP-2, SIG, SIG2, and Robovie R2, recognize three simultaneous utterances in real time.
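To make the localization step concrete, the following is a minimal sketch of the standard MUSIC pseudo-spectrum computation for a single frequency bin, written in NumPy. It is not HARK's implementation: HARK's LocalizeMUSIC module works on measured transfer functions and multiple bins, whereas here the steering vectors, array geometry, and function name `music_spectrum` are illustrative assumptions.

```python
import numpy as np

def music_spectrum(X, steering, n_sources):
    """MUSIC pseudo-spectrum for one frequency bin (illustrative sketch).

    X         : (n_mics, n_frames) complex STFT snapshots
    steering  : (n_dirs, n_mics) complex steering vectors; HARK derives
                these from measured transfer functions, here they are
                assumed given
    n_sources : assumed number of active sources
    """
    # Spatial correlation matrix averaged over frames.
    R = X @ X.conj().T / X.shape[1]
    # Eigendecomposition; eigh returns eigenvalues in ascending order,
    # so the first (n_mics - n_sources) eigenvectors span the noise subspace.
    _, V = np.linalg.eigh(R)
    En = V[:, : X.shape[0] - n_sources]
    # MUSIC spectrum: |a|^2 / |a^H En|^2, which peaks where the steering
    # vector is orthogonal to the noise subspace, i.e. at source directions.
    num = np.einsum('dm,dm->d', steering.conj(), steering).real
    den = np.linalg.norm(steering.conj() @ En, axis=1) ** 2
    return num / den
```

Scanning the returned spectrum over candidate directions and picking its peaks yields the source direction estimates; HARK additionally tracks these estimates over time before passing separated streams to ASR.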

HARK consists of many modules for robot audition. Each is implemented as a FlowDesigner module, and some are based on ManyEars, which provides microphone array processing for sound source localization, tracking, and separation.