DAIDESS
Title: DAIDESS
DNr: Berzelius-2023-359
Project Type: LiU Berzelius
Principal Investigator: Mikael Nilsson <mikael.nilsson@math.lth.se>
Affiliation: Lunds universitet
Duration: 2023-12-14 – 2024-07-01
Classification: 10207
Homepage: https://www.vinnova.se/en/p/daidess---decomposable-ai-deployments-made-efficient-and-sustainable-by-specialization/
Keywords:

Abstract

Artificial Intelligence (AI) and Machine Learning (ML) have rapidly transitioned from academic research to practical applications. Research has primarily focused on high-accuracy solutions to general problems, which are often far from efficient for specific use cases with limited hardware. Various accelerators exist, both in the cloud and on edge devices such as cameras and user devices, but their availability varies: cloud accelerators with global demand, and edge accelerators with which cameras are currently active. This makes effective partitioning of AI applications challenging, and existing tools are often unsuitable for distribution to the edge. Edge AI also differs from cloud-based AI in terms of code, data, and model considerations. This lack of productivity-enhancing technology leads to underutilization of edge devices in ML deployments and, as a result, to inefficient cloud-based video processing.

Our project aims to improve the usability of algorithms and platforms for computer vision by focusing on three aspects: research on generic, decomposable algorithms for computer vision; development of a platform to deploy these algorithms; and a specific use case in automated sports production. To ensure broad applicability, we consider variations across use cases and industries; within the project, however, pilots and tests will address a specific use case related to streaming sports.

The project idea is to make deep learning solutions deployable in an economically viable and environmentally friendly manner by:

1. Building general enabling technologies and methodologies for easily producing decomposable algorithms.
2. Building a platform that can dynamically partition and schedule algorithms onto the currently available efficient deep learning processors or accelerators (sketched below).
3. Specializing the general algorithms to the specific application/scene/… of interest, using known preconditions (for example camera calibration) and network pruning or distillation based on application-specific datasets with a mix of real and synthetic data (sketched below).
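
To illustrate point 2, the following is a minimal sketch, not the project's actual platform, of how a decomposed vision model could be partitioned across whatever accelerators happen to be available at run time. The stage boundaries, the round-robin placement policy, and the use of PyTorch are assumptions made only for the example.

import torch
import torch.nn as nn

def available_devices():
    """List the accelerators visible right now; fall back to CPU."""
    if torch.cuda.is_available():
        return [torch.device(f"cuda:{i}") for i in range(torch.cuda.device_count())]
    return [torch.device("cpu")]

# A toy "decomposable" model split into independent stages.
stages = nn.ModuleList([
    nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU()),   # e.g. edge-side feature extraction
    nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU()),  # e.g. cloud-side refinement
    nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10)),  # task head
])

def partition(stages, devices):
    """Assign stages round-robin to the devices found at run time."""
    placement = [devices[i % len(devices)] for i in range(len(stages))]
    for stage, dev in zip(stages, placement):
        stage.to(dev)
    return placement

def run(stages, placement, x):
    """Run the pipeline, moving the activation to each stage's device."""
    for stage, dev in zip(stages, placement):
        x = stage(x.to(dev))
    return x

placement = partition(stages, available_devices())
out = run(stages, placement, torch.randn(1, 3, 224, 224))
print(out.shape)  # torch.Size([1, 10])

In a real deployment the placement policy would be driven by the scheduler's view of currently available, efficient accelerators rather than a fixed round-robin rule.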
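
For point 3, the following is a minimal sketch of distillation-based specialization, assuming a large "general" teacher and a small student trained on application-specific data (standing in for a mix of real and synthetic frames). The models, data, and temperature are placeholders, not the project's actual method.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder general (teacher) and specialized (student) models.
teacher = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 256), nn.ReLU(), nn.Linear(256, 10))
student = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 32), nn.ReLU(), nn.Linear(32, 10))
teacher.eval()

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 4.0  # distillation temperature (assumed value)

def distill_step(frames):
    """One training step: match the student's soft predictions to the teacher's."""
    with torch.no_grad():
        t_logits = teacher(frames)
    s_logits = student(frames)
    loss = F.kl_div(
        F.log_softmax(s_logits / T, dim=1),
        F.softmax(t_logits / T, dim=1),
        reduction="batchmean",
    ) * T * T
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Placeholder batch standing in for application-specific (real + synthetic) frames.
print(distill_step(torch.randn(8, 3, 64, 64)))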