Differentially Private Learning of Distributed Deep Fuzzy Models
This study introduces a privacy-preserving framework for fuzzy machine learning. Treating the training data as private, the problem of learning a deep fuzzy model is considered under the differential privacy framework. The deep fuzzy model, formed by a composition of a finite number of Takagi-Sugeno type fuzzy filters, is learned using variational Bayesian inference. The training data is made private by adding random noise. This study suggests an $(\epsilon,\delta)$-differentially private noise-adding mechanism that achieves a multi-fold reduction in noise magnitude over the classical Gaussian mechanism and thus increases utility at a given privacy level. Further, the robustness offered by the stochastic deep fuzzy model is leveraged to alleviate the effect of the added data noise on utility. An architecture for distributed differentially private learning is suggested in which a privacy wall separates the private local training data from the globally shared data, and fuzzy sets and fuzzy rules are used to robustly aggregate the local deep fuzzy models into a global model. The privacy wall applies noise-adding mechanisms to attain differential privacy for each participant's private training data, so adversaries have no direct access to that data. The fuzzy-based approach of this study learns an efficient data representation, as verified by experiments on benchmark datasets.
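The abstract does not spell out the improved mechanism, but the classical Gaussian mechanism it is compared against is standard: for $\epsilon \in (0,1)$, adding zero-mean Gaussian noise with $\sigma \ge \sqrt{2\ln(1.25/\delta)}\,\Delta/\epsilon$ (where $\Delta$ is the query's $\ell_2$-sensitivity) yields $(\epsilon,\delta)$-differential privacy. A minimal illustrative sketch of this baseline, with hypothetical function names, is:

```python
import math
import random

def gaussian_mechanism_sigma(sensitivity, epsilon, delta):
    """Noise scale of the classical Gaussian mechanism:
    sigma = sqrt(2 * ln(1.25 / delta)) * sensitivity / epsilon,
    which gives (epsilon, delta)-DP for epsilon in (0, 1)."""
    return math.sqrt(2.0 * math.log(1.25 / delta)) * sensitivity / epsilon

def privatize(value, sensitivity, epsilon, delta, rng=random):
    """Release a scalar query answer with calibrated Gaussian noise added."""
    sigma = gaussian_mechanism_sigma(sensitivity, epsilon, delta)
    return value + rng.gauss(0.0, sigma)

# Example: a query with unit sensitivity at (epsilon=0.5, delta=1e-5)
# requires noise with standard deviation of roughly 9.7.
sigma = gaussian_mechanism_sigma(1.0, 0.5, 1e-5)
```

The paper's claimed multi-fold noise reduction would mean replacing this `sigma` with a smaller value while retaining the same $(\epsilon,\delta)$ guarantee; the construction itself is given in the paper, not the abstract.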
- M. Kumar, M. Rossbory, B. A. Moser, and B. Freudenthaler, "Differentially Private Learning of Distributed Deep Fuzzy Models," IEEE Transactions on Fuzzy Systems, under review.