Picture: Dr Konrad Wieland and the scientific partners at MODELS 2024 in Linz
The presentation at MODELS 2025 will examine how large language models (LLMs; a form of artificial intelligence) can improve model versioning workflows with LemonTree by supporting conflict detection and resolution.
MODELS, the ACM/IEEE International Conference on Model-Driven Engineering Languages and Systems, is the leading conference series for model-driven software and systems engineering and will take place from 5 to 10 October 2025 in Grand Rapids (MI/USA). This year, the ‘New Ideas and Emerging Results’ (NIER) track was introduced for the first time, and the contribution ‘Towards LLM-based conflict detection and resolution in model versioning’ by LieberLieber and Johannes Kepler University Linz (JKU) was accepted.
Novel ideas and innovative approaches to modelling
Since 1998, MODELS has covered all aspects of modelling, from languages and methods to tools and applications. The new ‘New Ideas and Emerging Results’ (NIER) track offers a dedicated forum for visionary, thought-provoking and forward-looking research in model-driven engineering (MDE). It aims to present novel ideas and innovative approaches that have the potential to shape the future of the field. Submissions that explore bold hypotheses, unconventional methods and interdisciplinary perspectives, and that stimulate discussions challenging the status quo and opening up new avenues for research, are welcome.
Submission by LieberLieber and JKU selected
As the acceptance rate at MODELS is in the single digits, LieberLieber is very proud to be included in the new NIER track at the very first opportunity. The submitting team consists of Martin Eisenberg, Stefan Klikovits and Manuel Wimmer (all from Johannes Kepler University Linz) and Konrad Wieland, CEO of LieberLieber. The presentation is entitled ‘Towards LLM-based conflict detection and resolution in model versioning’.
AI can significantly improve conflict resolution
Over the past two decades, a number of workflows for model versioning have been proposed. Standard workflows are based on a three-way model merge, which makes it possible to evaluate potentially conflicting changes in simultaneously developed model versions. However, the conflicts that can be detected usually relate to the syntactic level of models, such as update, deletion or usage conflicts. In contrast, unintended semantic inconsistencies often go unnoticed because the detection mechanisms lack semantic awareness of the modelling language or the modelled domain. Resolving such conflicts remains a manual task.
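The syntactic side of this process can be illustrated with a minimal sketch. The code below assumes that each model version has been flattened to a dictionary mapping element IDs to property values; this is purely illustrative and does not reflect LemonTree's actual data structures or algorithms.

```python
# Illustrative three-way conflict detection on flattened models.
# Assumption: each model version is {element_id: {property: value}};
# this is NOT LemonTree's internal representation.

def diff(base, version):
    """Changes from base to version as {(element, property): (old, new)}."""
    changes = {}
    for elem in set(base) | set(version):
        b, v = base.get(elem, {}), version.get(elem, {})
        for prop in set(b) | set(v):
            if b.get(prop) != v.get(prop):
                changes[(elem, prop)] = (b.get(prop), v.get(prop))
    return changes

def detect_conflicts(base, left, right):
    """Update/update conflicts: both sides change the same property to different values."""
    dl, dr = diff(base, left), diff(base, right)
    return sorted(
        key for key in dl.keys() & dr.keys()
        if dl[key][1] != dr[key][1]  # different target values -> conflict
    )

# Both developers rename the same class differently:
base  = {"Class1": {"name": "Order", "abstract": "false"}}
left  = {"Class1": {"name": "PurchaseOrder", "abstract": "false"}}
right = {"Class1": {"name": "SalesOrder", "abstract": "false"}}
print(detect_conflicts(base, left, right))  # [('Class1', 'name')]
```

A purely syntactic check like this flags the rename/rename clash, but it cannot tell whether `PurchaseOrder` and `SalesOrder` are semantically interchangeable in the modelled domain, which is exactly the gap the presentation argues LLMs can help close.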
Optimising model versioning workflows with AI
The joint presentation examines how large language models (LLMs) can improve model versioning workflows by supporting conflict detection and resolution. An LLM-based solution for detecting conflicts in three-way model merging is demonstrated. Drawing on a collection of conflict types from the existing literature, the presentation will show how an LLM assistant can
1) locate conflicting changes and
2) offer solution options with clear justifications and explanations of their implications.
The results show that the broad exposure of LLMs to many domains and modelling languages can help find and resolve complex versioning conflicts. The implementation combines the LieberLieber tool LemonTree, which analyses models and model changes, with a GPT-4o-based LLM assistant equipped with the relevant context to detect and resolve conflicts. Finally, the presentation discusses directions for future research on improving model versioning workflows with LLMs.
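To give a flavour of step 2), the sketch below assembles the kind of context an LLM assistant might receive for a single conflict. The prompt format and the helper `build_resolution_prompt` are assumptions for illustration; they do not represent the actual LemonTree/GPT-4o integration.

```python
# Hypothetical sketch: packaging a three-way merge conflict as context
# for an LLM assistant. The prompt wording is an assumption, not the
# format used by LemonTree.

def build_resolution_prompt(conflict, base_val, left_val, right_val,
                            language="UML"):
    """Assemble a prompt describing one conflicting change for the LLM."""
    elem, prop = conflict
    return (
        f"You are a model-versioning assistant for {language} models.\n"
        f"A three-way merge found a conflict on element '{elem}', "
        f"property '{prop}':\n"
        f"- base:  {base_val!r}\n"
        f"- left:  {left_val!r}\n"
        f"- right: {right_val!r}\n"
        "Propose resolution options, each with a clear justification "
        "and the implications of applying it."
    )

prompt = build_resolution_prompt(
    ("Class1", "name"), "Order", "PurchaseOrder", "SalesOrder"
)
print(prompt)
```

In a real pipeline, a prompt like this would be sent to the model (e.g. via an LLM provider's chat API) together with the surrounding model context, and the returned options would be shown to the user for review rather than applied automatically.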