Protecting Privacy using AI

Blurring Faces in the Background of Livestreams

With the ever-growing popularity of smartphones and the internet, more and more people record videos in public and even share live streams on social media. We protect the privacy of bystanders, who cannot consent to being recorded, by automatically blurring their faces in real time. We employ AI not to collect personal data automatically, as is done in autocratic regimes, but to help people navigate public places anonymously. Our user base is anyone who shares videos, such as interviews for TV channels like ARD or ZDF, or for social media platforms like Instagram.


We chose YOLOv5 as our single-shot detector for its state-of-the-art performance, and its nano version to achieve real-time inference on the computationally constrained Jetson Nano. To reach 20 fps, we optimized our trained YOLOv5s model with the TensorRT framework, reducing inference time by a factor of 3.16.
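To put the 3.16x speed-up in perspective, a quick back-of-the-envelope calculation shows the per-frame budget before and after TensorRT optimization (the per-frame times below are derived from the 20 fps target stated above, not measured values):

```python
# Implied per-frame inference times, derived from the reported numbers.
TARGET_FPS = 20     # throughput reached after TensorRT optimization
SPEEDUP = 3.16      # reported reduction factor of the inference time

optimized_ms = 1000 / TARGET_FPS       # ~50 ms per frame after TensorRT
baseline_ms = optimized_ms * SPEEDUP   # implied unoptimized inference time
baseline_fps = 1000 / baseline_ms

print(f"optimized: {optimized_ms:.1f} ms/frame ({TARGET_FPS} fps)")
print(f"baseline:  {baseline_ms:.1f} ms/frame ({baseline_fps:.1f} fps)")
```

In other words, without the TensorRT optimization the same model would run at roughly 6 fps, well below real-time.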

Detecting Background Faces: 

We pre-trained our model on the WIDER FACE [1] dataset, which gave us a strong baseline for detecting faces in diverse settings. For our custom dataset, we downloaded videos from YouTube and annotated 2,289 images with the two classes interview and background. Using transfer learning, we fine-tuned the pre-trained model on this custom data, yielding a model that robustly detects faces and classifies them as foreground (interview) or background.
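In the YOLOv5 training pipeline, such a two-class setup is expressed in a dataset YAML file. The sketch below is illustrative only; the paths are placeholders, not the project's actual directory layout:

```yaml
# Illustrative YOLOv5 data config for the two-class fine-tune
# (paths are placeholders, not the project's actual layout).
train: datasets/interviews/images/train
val: datasets/interviews/images/val

nc: 2                          # number of classes
names: [interview, background]
```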

Model explainability:

We used Grad-CAM to evaluate what our model has learned and can show that not only faces but also arms, hands, and even a microphone produce high activations and are thus important to our model.
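The core of Grad-CAM is small enough to sketch: each feature map of a convolutional layer is weighted by the spatial average of its gradient with respect to the target score, the weighted maps are summed, and negative contributions are clipped. A minimal, framework-free sketch (the real pipeline would take activations and gradients from the trained detector, not random arrays):

```python
import numpy as np

def grad_cam(activations: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Minimal Grad-CAM heatmap from one conv layer.

    activations, gradients: arrays of shape (C, H, W) for the chosen layer.
    Returns a (H, W) map normalized to [0, 1].
    """
    # One importance weight per channel: global average of its gradient.
    weights = gradients.mean(axis=(1, 2))
    # Weighted sum over channels, then ReLU to keep positive evidence only.
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0)
    if cam.max() > 0:
        cam /= cam.max()  # normalize for visualization
    return cam
```

Upsampled to the input resolution and overlaid on the frame, this map is what reveals the high activations on arms, hands, and the microphone.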


We successfully blur background faces in real time, providing a smooth video stream and thus an enjoyable user experience while protecting privacy. We ship our system inside a Docker container and provide a Flask web application as a lightweight user interface to showcase it. The system can also be extended with object trackers to further increase performance on hardware with more compute. A demonstration of our results can be viewed here: Demo.
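The anonymization step itself reduces to overwriting each detected background-face box with a blurred version of its pixels. The dependency-free sketch below uses block-wise pixelation as a stand-in for the Gaussian blur an OpenCV pipeline would typically apply; function and parameter names are ours, not the project's:

```python
import numpy as np

def blur_boxes(frame: np.ndarray, boxes, block: int = 8) -> np.ndarray:
    """Anonymize regions of a frame by pixelating each bounding box.

    frame: (H, W, 3) uint8 image; boxes: iterable of (x1, y1, x2, y2).
    Stand-in for the Gaussian blur a production pipeline would use.
    """
    out = frame.copy()
    for x1, y1, x2, y2 in boxes:
        roi = out[y1:y2, x1:x2]  # view into the output frame
        h, w = roi.shape[:2]
        for by in range(0, h, block):
            for bx in range(0, w, block):
                patch = roi[by:by + block, bx:bx + block]
                # Replace the patch by its per-channel mean color.
                patch[...] = patch.mean(axis=(0, 1), keepdims=True).astype(np.uint8)
    return out
```

Applied per frame to the boxes the detector labels as background, this keeps foreground (interview) faces sharp while rendering bystanders unrecognizable.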