Learning under Adversarial Resilience

In this thesis we study two problems: one in unsupervised learning, namely k-means clustering, and the other in supervised learning in the presence of adversarial perturbations. We carry out a beyond-worst-case analysis and show that in both settings, instances that are resilient to adversarial perturbations are also tractable. Resilience to adversarial perturbations implies margin conditions of different flavors. For the k-means problem, the resilience comes in the form of an assumption related to Bilu-Linial stability. We give an efficient algorithm for k-means under the assumption that perturbing each point slightly does not alter the optimal clustering. This assumption implies that there exists an angular margin between any two clusters in an optimal clustering. In the adversarial learning problem we assume realizability, i.e., that there exists a perfect classifier that is resilient to adversarial perturbations. This in turn implies that the perfect classifier has a margin, and we show that under this assumption adversarial robustness is achievable. We also show hardness of approximation in the adversarial learning setting. A perhaps surprising observation of this thesis is the role that the assumption of adversarial resilience plays in making NP-hard problems tractable in both the supervised and the unsupervised settings.
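To make the two resilience assumptions concrete, the following LaTeX sketch gives one plausible formalization; the notation (epsilon, gamma, C*, f*) is illustrative and not quoted from the thesis itself.

% Sketch of the two resilience assumptions described in the abstract.
% Parameter names and exact quantifiers are assumptions for illustration.

% k-means perturbation resilience (related to Bilu-Linial stability):
% moving every point by at most \epsilon leaves the optimal clustering unchanged.
\[
  \|x_i' - x_i\| \le \epsilon \;\; \text{for all } i
  \quad\Longrightarrow\quad
  \mathcal{C}^*(x_1', \dots, x_n') = \mathcal{C}^*(x_1, \dots, x_n).
\]

% Realizability under adversarial perturbations: a perfect classifier f^*
% remains correct under any perturbation of radius \gamma, i.e. has margin \gamma.
\[
  \exists f^* : \;\; f^*(x + \delta) = y
  \quad \text{for all } (x, y) \sim \mathcal{D} \text{ and all } \|\delta\| \le \gamma.
\]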
