Compared with other electroencephalogram (EEG) modalities, motor imagery (MI)-based brain-computer interfaces (BCIs) can provide more natural and intuitive communication between human intentions and external machines. However, this type of BCI depends heavily on effective signal processing to discriminate the EEG patterns corresponding to different MI tasks, especially on the feature extraction procedure. In this study, different feature extraction methods were compared for EEG classification of imaginary movements within the same upper extremity. Unlike traditional MI tasks (left/right hand), six imaginary movements of the same unilateral upper extremity were proposed and evaluated: elbow flexion/extension, forearm supination/pronation, and hand grasp/open. To tackle the challenge of classifying MI tasks within the same limb, four types of feature extraction methods were implemented and compared in combination with support vector machine (SVM) and linear discriminant analysis (LDA) classifiers: wavelet transform, power spectrum, autoregressive model, and common spatial patterns (CSP) together with its variants, filter-bank CSP (FBCSP) and regularized CSP (RCSP). On a dataset collected from 8 individuals, the overall accuracies of the CSP-based methods were significantly higher than those of the other three feature extraction types; in particular, the SVM with FBCSP achieved the best performance, with an average accuracy of 71.78%. These decoding results for MI tasks within a single upper extremity are encouraging in the context of more natural MI-BCIs for controlling assistive devices, such as a neuroprosthetic or robotic arm, for individuals with severely impaired upper-extremity function.
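To make the CSP-based feature extraction concrete, the following is a minimal, hedged sketch of a two-class CSP + log-variance pipeline on synthetic EEG-like data. The array shapes, the synthetic signal model, and the nearest-class-mean classifier are illustrative assumptions only; they are not the study's recording setup, its six-class problem, or its actual SVM/LDA and FBCSP implementations.

```python
import numpy as np

rng = np.random.default_rng(0)

def csp_filters(trials_a, trials_b, n_components=2):
    """Compute CSP spatial filters from two classes of trials.

    Each input array has shape (n_trials, n_channels, n_samples).
    Returns an (n_components, n_channels) filter matrix.
    """
    avg_cov = lambda trials: np.mean([np.cov(t) for t in trials], axis=0)
    ca, cb = avg_cov(trials_a), avg_cov(trials_b)
    # Generalized eigenproblem: ca @ w = lambda * (ca + cb) @ w
    evals, evecs = np.linalg.eig(np.linalg.solve(ca + cb, ca))
    order = np.argsort(evals.real)[::-1]
    evecs = evecs.real[:, order]
    # Keep filters from both ends of the eigenvalue spectrum,
    # i.e. maximal variance for one class, minimal for the other.
    keep = list(range(n_components // 2)) + list(range(-(n_components // 2), 0))
    return evecs[:, keep].T

def log_var_features(trials, w):
    """Project trials through the CSP filters, then take normalized
    log variance (a standard band-power surrogate) as features."""
    feats = []
    for t in trials:
        v = np.var(w @ t, axis=1)
        feats.append(np.log(v / v.sum()))
    return np.array(feats)

def make_trials(n, gain):
    """Synthetic 4-channel trials; `gain` boosts one channel per class."""
    return rng.standard_normal((n, 4, 200)) * gain[None, :, None]

gain_a = np.array([3.0, 1.0, 1.0, 1.0])   # class A: channel 0 dominant
gain_b = np.array([1.0, 3.0, 1.0, 1.0])   # class B: channel 1 dominant
train_a, train_b = make_trials(40, gain_a), make_trials(40, gain_b)
test_a, test_b = make_trials(20, gain_a), make_trials(20, gain_b)

w = csp_filters(train_a, train_b)
mean_a = log_var_features(train_a, w).mean(axis=0)
mean_b = log_var_features(train_b, w).mean(axis=0)

def predict(trials):
    """Nearest-class-mean in feature space (stand-in for SVM/LDA)."""
    f = log_var_features(trials, w)
    da = np.linalg.norm(f - mean_a, axis=1)
    db = np.linalg.norm(f - mean_b, axis=1)
    return np.where(da < db, 0, 1)

correct = (predict(test_a) == 0).sum() + (predict(test_b) == 1).sum()
accuracy = correct / 40
print(f"synthetic CSP accuracy: {accuracy:.2f}")
```

FBCSP extends this sketch by band-pass filtering the trials into several frequency bands, running CSP per band, and pooling (or selecting) the resulting log-variance features before classification.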