Interpretability is acknowledged as one of the most appreciated advantages of fuzzy systems in many applications, especially in those with high human interaction, where it actually becomes a strong requirement. However, it is important to remark that there is a somewhat misleading but widespread belief, even within part of the fuzzy community, regarding fuzzy systems as interpretable no matter how they were designed. Of course, we are aware that the use of fuzzy logic favors the interpretability of the designed models. Thanks to their semantic expressivity, close to natural language, fuzzy variables and rules can be used to formalize linguistic propositions that are likely to be easily understood by human beings. Obviously, this fact facilitates the knowledge extraction and representation tasks carried out when modeling real-world complex systems. Notwithstanding, fuzzy logic is not enough by itself to guarantee the interpretability of the final model. As thoroughly illustrated in this special issue, achieving interpretable fuzzy systems is a matter of careful design, because fuzzy systems cannot be deemed interpretable per se. Thus, several constraints have to be imposed along the whole design process with the aim of producing truly interpretable fuzzy systems, in the sense that every element of the whole system may be checked and understood by a human being. Otherwise, fuzzy systems may even become black boxes.