Distributed Active Learning for Support Vector Machines
SNIC Medium Compute
Alexander Schliep <email@example.com>
2020-12-01 – 2021-12-01
The goal of this project is to study active learning for machine learning algorithms in a distributed setting, for training on large-scale problems. In particular, we study multi-agent active learning for Support Vector Machines (SVMs) with adaptive communication, in which agents may drop in or out of communication according to a policy. For instance, an agent can drop out of communication, or even out of the active learning step, once the accuracy of its local model is close enough to the target accuracy. Communication between agents is designed in a decentralized manner: no master or central agent controls the communication, and each agent communicates only with its one-hop neighbors. This scheme is incorporated into a distributed, network-based SVM algorithm.
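To make the adaptive communication policy concrete, the following toy NumPy sketch simulates the kind of behavior described above; it is illustrative only, not the project's actual algorithm. Agents on a ring topology each take hinge-loss subgradient steps on local data (a simple linear-SVM surrogate), average their weights with active one-hop neighbors, and drop out of communication once their local accuracy reaches a target. All names, the ring topology, and the hyperparameters are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable data, split across 4 agents on a ring topology.
n_agents, d = 4, 2
w_true = np.array([1.0, -1.0])          # hypothetical ground-truth separator
X = rng.normal(size=(n_agents * 50, d))
y = np.sign(X @ w_true)
parts = np.array_split(np.arange(len(X)), n_agents)

w = np.zeros((n_agents, d))             # each agent's local linear-SVM weights
active = np.ones(n_agents, dtype=bool)  # agents still communicating
target_acc, lam, lr = 0.95, 0.01, 0.1   # illustrative hyperparameters

def local_acc(i):
    Xi, yi = X[parts[i]], y[parts[i]]
    return float(np.mean(np.sign(Xi @ w[i]) == yi))

for _ in range(100):
    # 1) Local training: one hinge-loss subgradient step per active agent.
    for i in np.flatnonzero(active):
        Xi, yi = X[parts[i]], y[parts[i]]
        viol = yi * (Xi @ w[i]) < 1     # margin violations
        grad = lam * w[i]
        if viol.any():
            grad = grad - (yi[viol][:, None] * Xi[viol]).mean(axis=0)
        w[i] = w[i] - lr * grad
    # 2) Decentralized mixing: average with active one-hop ring neighbors
    #    (no master node coordinates this step).
    new_w = w.copy()
    for i in np.flatnonzero(active):
        nbrs = [j for j in ((i - 1) % n_agents, (i + 1) % n_agents) if active[j]]
        new_w[i] = np.mean(w[[i] + nbrs], axis=0)
    w = new_w
    # 3) Adaptive policy: drop out of communication once the local model
    #    is close enough to the target accuracy.
    for i in np.flatnonzero(active):
        if local_acc(i) >= target_acc:
            active[i] = False

global_acc = np.mean([local_acc(i) for i in range(n_agents)])
```

On this separable toy data every agent quickly reaches the target accuracy and leaves the communication round, which is the qualitative behavior the adaptive policy is meant to exploit at scale.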
We will compare several active learning methods in terms of accuracy and CPU time. We will investigate a lower bound on the number of samples that must be labeled to achieve good performance when only a few agents communicate. We will conduct experiments on large-scale problems to evaluate the effectiveness of the adaptive communication strategy and the proposed distributed multi-agent active learning.
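One baseline that comparisons of active learning methods for SVMs commonly include is margin-based uncertainty sampling, where the unlabeled points closest to the current hyperplane are queried for labels first. A minimal sketch (the function name and toy data are assumptions, not part of the proposal):

```python
import numpy as np

def margin_uncertainty_query(w, X_pool, k):
    """Return indices of the k unlabeled points closest to the hyperplane
    defined by w, i.e. the most 'uncertain' under a linear SVM."""
    scores = np.abs(X_pool @ w)      # |w . x| is proportional to margin distance
    return np.argsort(scores)[:k]

# Toy usage: with w = (1, 0), uncertainty is simply |x_0|.
X_pool = np.array([[3.0, 1.0], [0.1, 2.0], [-2.0, 0.5], [0.05, -1.0]])
w = np.array([1.0, 0.0])
print(margin_uncertainty_query(w, X_pool, 2))  # -> [3 1]
```

A random-sampling baseline replaces `argsort` with a random permutation; measuring the labeled-sample count each strategy needs to reach a fixed accuracy is one way to probe the lower bound mentioned above.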