Management Beyond the Edge
The proposed multi-objective optimal orchestration acts as an engine that enables all stakeholders to achieve their goals while reducing carbon emissions. We will therefore demonstrate one use case, context-aware distributed learning, to validate the proposed orchestration.
The proposed distributed learning will minimize carbon emissions while optimizing resource cost and machine-learning quality. Furthermore, it uses a context-aware method to mitigate the performance degradation caused by the heterogeneous datasets supplied by data providers; such context awareness in distilling knowledge from data is critical to maintaining both model quality and resource efficiency.
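One simple way to combine the three objectives named above (carbon emission, resource cost, and model quality) is weighted-sum scalarization. The sketch below is illustrative only: the weights, the normalization to [0, 1], and the deployment plans are assumptions, not the proposal's actual orchestration algorithm.

```python
# Illustrative scalarization of the three orchestration objectives.
# Weights and input scales are assumed, not taken from the proposal.

def orchestration_score(carbon, cost, accuracy,
                        w_carbon=0.4, w_cost=0.3, w_quality=0.3):
    """Lower is better: penalize normalized carbon and cost, reward accuracy.

    carbon and cost are assumed pre-normalized to [0, 1]; accuracy is in [0, 1].
    """
    return w_carbon * carbon + w_cost * cost - w_quality * accuracy

# Example: pick the deployment plan with the lowest (best) score.
plans = {
    "edge_cluster": orchestration_score(0.2, 0.5, 0.88),
    "cloud_gpu":    orchestration_score(0.7, 0.3, 0.93),
}
best = min(plans, key=plans.get)
```

A weighted sum is the simplest scalarization; a full multi-objective orchestrator would typically explore trade-offs (e.g., a Pareto front) rather than fix the weights up front.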
For instance, our previous work demonstrated that a high-quality dataset yielded up to a 13% gain in maximum achievable accuracy and up to 73% lower resource consumption. This result indicates that resource savings and performance gains depend strongly on dataset quality. This work will therefore enable context awareness in distributed learning to mitigate that dependence.
Much research has focused on reducing the energy consumption and carbon emissions of centralized training in data centers; however, the energy consumption and carbon footprint of distributed learning remain largely uninvestigated. We will therefore examine the carbon footprint of distributed learning by comparing it to traditional centralized learning and by analyzing widely used distributed learning methods (e.g., knowledge distillation, meta-learning, and transfer learning), resulting in a carbon-efficient distributed learning method.
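The comparison above can be made concrete with the common back-of-the-envelope accounting: emissions = energy (kWh) × carbon intensity (kgCO2e/kWh), with a PUE factor for data-center overhead. All power draws, durations, PUE values, and intensities below are illustrative assumptions, not measured results.

```python
# Back-of-the-envelope carbon accounting for training jobs.
# All numeric inputs are illustrative assumptions.

def training_emissions_kg(power_kw, hours, pue=1.2, intensity_kg_per_kwh=0.4):
    """Estimate kgCO2e for one training run: energy * PUE * grid intensity."""
    return power_kw * hours * pue * intensity_kg_per_kwh

# Centralized: one long run on a data-center GPU node.
centralized = training_emissions_kg(power_kw=3.0, hours=100)

# Distributed: many short runs on low-power edge nodes (no data-center
# overhead, pue=1.0) plus a server-side aggregation step.
distributed = sum(
    training_emissions_kg(power_kw=0.05, hours=20, pue=1.0)
    for _ in range(50)
) + training_emissions_kg(power_kw=3.0, hours=2)
```

Which side of the comparison wins depends entirely on the assumed workloads and grid intensities; the point of the proposed study is to measure these quantities for real distributed learning methods rather than assume them.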
The multi-objective optimal orchestration is the core algorithm driving MLOps; it will address various performance and ML-model deployment issues. We will therefore propose and use test scenarios with different combinations of distributed learning algorithms and datasets to prove its feasibility.
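One way such test scenarios can be compared under multiple objectives is to filter the results down to their Pareto front. The sketch below assumes hypothetical (carbon, cost, error) tuples per algorithm/dataset combination, all to be minimized; the scenario names and values are invented for illustration.

```python
# Pareto-front filtering over hypothetical scenario results.
# Each result is a (carbon, cost, error) tuple; lower is better in all three.

def dominates(a, b):
    """True if a is at least as good as b everywhere and strictly better once."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(results):
    """Keep only results that no other result dominates."""
    return [r for r in results
            if not any(dominates(other, r) for other in results if other != r)]

scenarios = {
    ("knowledge_distillation", "dataset_a"): (1.0, 2.0, 0.10),
    ("meta_learning", "dataset_a"):          (1.5, 1.0, 0.12),
    ("transfer_learning", "dataset_a"):      (2.0, 2.5, 0.11),  # dominated
}
front = pareto_front(list(scenarios.values()))
```

The dominated transfer-learning scenario is filtered out, while the two remaining scenarios represent genuine trade-offs between the objectives.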
In conclusion, the Berzelius resource is highly valuable for carrying out this heuristic approach, which demands substantial compute to implement reference distributed learning algorithms and to process datasets for comprehensive tests across varied, complex scenarios.