Private and sensitive information is often revealed in posts published on Social Networks (SNs). This happens because users want to increase their interactions within specific social groups, but also because of poor awareness of the privacy risks involved. We argue that technologies able to evaluate the sensitivity of information while it is being published could enhance privacy protection by warning users about the risks arising from disclosing certain information. To this end, we propose a method, and an accompanying tool, to automatically intercept the sensitive information delivered in a social network post, by exploiting recurrent natural language patterns that users often employ to disclose private data. A comparison with several machine learning techniques shows that our method outperforms them: it is more precise and accurate, and it depends neither on (i) a specific training set nor on (ii) the selection of particular features.

Proc. of 13th International ARES Conference on Availability, Reliability and Security (ARES 2018) - Hamburg, Germany, August 27-30, 2018
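The pattern-based interception described above can be sketched minimally as follows. The pattern set, labels, and function name are illustrative assumptions for exposition only, not the actual rules used by the paper's tool:

```python
import re

# Hypothetical examples of recurrent natural-language patterns through
# which users disclose private data in posts (illustrative only).
DISCLOSURE_PATTERNS = [
    (re.compile(r"\bmy (phone|mobile) (number )?is\b", re.I), "phone number"),
    (re.compile(r"\bI live (in|at)\b", re.I), "home location"),
    (re.compile(r"\bmy email (address )?is\b", re.I), "email address"),
    (re.compile(r"\bI was born (on|in)\b", re.I), "date/place of birth"),
]

def detect_sensitive(post: str) -> list[str]:
    """Return the categories of private data a post may disclose."""
    return [label for pattern, label in DISCLOSURE_PATTERNS
            if pattern.search(post)]

warnings = detect_sensitive("Call me! My phone number is 555-0101.")
# warnings == ["phone number"]
```

Such a detector could run while the user is composing a post and raise a warning before publication; unlike a trained classifier, it needs no labeled training set or feature selection, which is the independence property the abstract claims.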