Privacy Filter to Disable Facial Recognition Developed by Researchers

Every time you upload a picture or a video to a social media platform, you give away a little more information about yourself to facial recognition systems. These algorithms ingest data about your identity, your location, and the people you know, and they are continually improving. As concerns over security and privacy on social platforms grow, a team of University of Toronto researchers, headed by Professor Parham Aarabi and graduate student Avishek Bose, has developed an algorithm that dynamically disrupts facial recognition systems. Prof. Aarabi states that with facial recognition becoming much more capable, personal privacy has become a cause for concern. "This is one method by which beneficial anti-facial-recognition systems can combat that capability," he added.

Their solution uses a deep learning technique known as adversarial training, in which two artificial intelligence (AI) algorithms are pitted against each other. Aarabi and Bose designed a pair of neural networks: the first attempts to recognize faces, and the second tries to disrupt the first's facial recognition. The two continually compete and learn from each other, creating an ongoing AI arms race. The outcome is an Instagram-like filter that can be applied to photographs to protect privacy. The algorithm alters individual pixels in the image, making changes that are nearly imperceptible to the human eye.
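The core idea of the disrupting network can be sketched in miniature. The toy below is an assumption for illustration only, not the authors' actual system: it stands in a fixed linear-sigmoid scorer for the face recognizer and applies a small, bounded gradient-sign perturbation (in the style of fast-gradient-sign attacks) that lowers the recognizer's confidence while keeping each pixel change imperceptibly small.

```python
import math
import random

# Hypothetical stand-in for a face recognizer: a fixed linear score
# squashed through a sigmoid. Real systems use deep neural networks;
# this sketch only illustrates the bounded-perturbation idea.
random.seed(0)
W = [random.gauss(0.0, 1.0) for _ in range(64)]   # "recognizer" weights
image = [random.random() for _ in range(64)]      # flattened 8x8 grayscale image

def recognizer_score(x):
    """Confidence in [0, 1] that a face is present."""
    z = sum(w * xi for w, xi in zip(W, x))
    return 1.0 / (1.0 + math.exp(-z))

def disrupt(x, eps=0.02):
    """Nudge each pixel by at most eps in the direction that lowers
    the recognizer's confidence. The eps bound keeps the edit subtle
    to the human eye while still degrading the recognizer."""
    s = recognizer_score(x)
    # Gradient of the sigmoid-linear score with respect to each pixel.
    grad = [s * (1.0 - s) * w for w in W]
    sign = lambda g: 1 if g > 0 else (-1 if g < 0 else 0)
    # Step against the gradient, clamped to valid pixel range [0, 1].
    return [min(1.0, max(0.0, xi - eps * sign(g)))
            for xi, g in zip(x, grad)]

adv_image = disrupt(image)
```

In the real adversarial-training setup, the recognizer would also be retrained against these perturbed images, and the disruptor updated in turn, which is what produces the "arms race" the researchers describe.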
