WASP - Private Model Selection
Abstract: Deep neural networks (DNNs) are among the most widely used machine learning algorithms. In the literature, little attention has been given to the streaming nature of real-world data, where data arrives continuously and its distribution may change frequently. DNNs must be updated regularly to incorporate these changes in the data. Under such concept drift, preserving the privacy of each individual becomes a challenging task. Existing privacy models that learn under concept drift, such as k-anonymity and local differential privacy, either leak data or cause significant utility loss. The recently proposed integral privacy model is robust against membership inference attacks and does not cause utility loss. In this paper, we focus on the notion of integrally private DNNs under concept drift. We introduce a methodology to recommend integrally private, utility-preserving DNN models. We use a data-centric approach to generate subsamples that have the same class distribution as the original data. We experimented with 6 datasets of varied sizes (10k to 7 million instances), and our results show that the recommended private models achieve utility comparable to the non-private benchmark. We also compare our results with the industry-standard local differential privacy to demonstrate the superiority of our methodology.
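The abstract's data-centric subsample generation, which preserves the class distribution of the original data, amounts to stratified sampling. The sketch below is an illustrative implementation, not the paper's actual procedure; the function name, the per-class sampling fraction, and the NumPy-based design are all assumptions.

```python
import numpy as np


def stratified_subsample(X, y, frac, rng=None):
    """Draw a subsample whose class distribution matches that of y.

    X: feature array of shape (n, d); y: label array of shape (n,).
    frac: fraction of each class to keep (0 < frac <= 1).
    Note: this is a hypothetical sketch, not the paper's method.
    """
    rng = np.random.default_rng(rng)
    keep = []
    for cls in np.unique(y):
        # Sample the same fraction from every class, so the
        # subsample's class proportions mirror the original data.
        cls_idx = np.flatnonzero(y == cls)
        n_keep = max(1, int(round(frac * cls_idx.size)))
        keep.append(rng.choice(cls_idx, size=n_keep, replace=False))
    idx = np.concatenate(keep)
    rng.shuffle(idx)
    return X[idx], y[idx]
```

For example, with 70 instances of class 0 and 30 of class 1 and frac=0.5, the subsample contains 35 and 15 instances respectively, keeping the 70/30 ratio intact.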