Privacy-Preserving Image Acquisition for Neural Vision Systems

dc.contributor.author: Sepehri, Yamin
dc.contributor.author: Pad, Pedram
dc.contributor.author: Kündig, Clément
dc.contributor.author: Frossard, Pascal
dc.contributor.author: Dunbar, L. Andrea
dc.description: This article is accepted to IEEE Transactions on Multimedia, doi: 10.1109/TMM.2022.3207018, © 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.description.abstract: Preserving privacy is a growing concern in our society, where cameras are ubiquitous. In this work, we propose a trainable image acquisition method that removes sensitive information in the optical domain before it reaches the image sensor. The method benefits from a trainable optical convolution kernel that transmits the desired information whilst filtering out the sensitive information, making it irretrievable under different privacy attacks in the digital domain. This is in contrast with current digital privacy-preserving methods, which are all vulnerable to direct-access attacks. Also, in contrast with most previous optical privacy-preserving methods, which cannot be trained, our method is data-driven and optimized for the specific application at hand. Moreover, there is no additional computation or power burden on the acquisition system, since it works passively in the optical domain and can even be used in conjunction with other privacy-preserving techniques in the digital domain. We demonstrate our new, generic method in several scenarios, such as smile or open-mouth detection as the desired attribute while gender or the wearing of make-up is filtered out as the sensitive content. Through several experiments, we show that this method reduces sensitive content by around 65% while causing a negligible reduction in the desired information. Moreover, we tested our method against a deep reconstruction attack and confirmed that this attack cannot reconstruct the original sensitive content. This new method has various use cases, such as feedback systems for smart-TV content or outdoor advertising.
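The core idea described in the abstract — optimizing an acquisition filter so that a desired attribute passes through while a sensitive attribute is suppressed — can be illustrated with a deliberately tiny sketch. This is not the paper's actual optical training pipeline (which learns a physical convolution kernel end-to-end with neural classifiers); it is an assumed toy in which the "desired" and "sensitive" attributes are encoded along two fixed random directions of a flattened image, and a linear acquisition weight vector is trained by gradient descent to keep one projection and null the other:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # flattened toy "image" dimension (illustrative assumption)

# Hypothetical fixed directions encoding the desired vs. sensitive attributes.
u_des = rng.standard_normal(d); u_des /= np.linalg.norm(u_des)
u_sen = rng.standard_normal(d); u_sen /= np.linalg.norm(u_sen)

# Learn acquisition weights w by gradient descent on a two-term loss:
# keep <w, u_des> close to 1 while penalizing any response <w, u_sen>.
w = rng.standard_normal(d) * 0.01
lr, lam = 0.05, 10.0
for _ in range(500):
    # gradient of (1 - w.u_des)^2 + lam * (w.u_sen)^2
    grad = -2.0 * (1.0 - w @ u_des) * u_des + 2.0 * lam * (w @ u_sen) * u_sen
    w -= lr * grad

print(round(float(w @ u_des), 3))  # close to 1: desired information retained
print(round(float(w @ u_sen), 3))  # close to 0: sensitive information removed
```

The same tension — retain one attribute's signal, suppress another's — is what the paper resolves with a passive optical element, so the filtering happens before any digital data exists that an attacker could access.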
dc.identifier.citation: IEEE Transactions on Multimedia, vol. 25, pp. 6232-6244, 2023
dc.relation.ispartof: IEEE Transactions on Multimedia
dc.title: Privacy-Preserving Image Acquisition for Neural Vision Systems
dc.type: Journal Article
dc.type.csemresearchareas: Data & AI
dc.type.csemresearchareas: IoT & Vision