Embedded AI systems are becoming increasingly complex to develop and maintain, requiring specialized workflows that span data processing, model conversion, optimization, and deployment across heterogeneous hardware platforms. Recently, large language models (LLMs) have emerged as a promising tool for automating parts of this lifecycle. In this talk, I present recent work investigating the use of generative AI models as orchestration agents for embedded machine learning pipelines. Using an automated system that leverages LLMs to generate and iteratively refine software artifacts for embedded platforms, we evaluate the feasibility of automating key stages of the AI lifecycle. Our empirical results reveal both the promise and the limitations of this approach: generative models can significantly accelerate development workflows, but they also introduce instability, iterative failure modes, and unpredictable operational costs. I will discuss the main failure patterns observed in practice and outline research directions aimed at improving reliability through hybrid reasoning frameworks and system-level feedback mechanisms.
Autopilots need parachutes: Reliability lessons from LLM-automated embedded AI systems
TILOS-SDSU Seminar, 25 March 2026, San Diego, USA
Type:
Talk
City:
San Diego
Date:
2026-03-25
Department:
Communication Systems
Eurecom Ref:
8663
Copyright:
© EURECOM. Personal use of this material is permitted. The definitive version of this paper was published in TILOS-SDSU Seminar, 25 March 2026, San Diego, USA and is available at:
See also:
PERMALINK: https://www.eurecom.fr/publication/8663