Recent breakthroughs in artificial intelligence have produced Large Language Models (LLMs) and a new wave of Tabular Foundation Models (TFMs). Both promise to redefine how we query, integrate, and reason over relational data, yet they embody opposing philosophies: LLMs pursue broad generality through massive text-centric pre-training, whereas TFMs embed inductive biases that mirror table structure and relational semantics. This panel assembles researchers and practitioners from academia and industry to debate which path will most effectively power the next generation of data management systems: specialized TFMs, ever-stronger general-purpose LLMs, or a hybrid of the two. Panelists will confront questions of generality, accuracy, scalability, robustness, cost, and usability across core data management tasks such as Text-to-SQL translation, schema understanding, and entity resolution. The discussion aims to surface critical research challenges and to guide the community's investment of effort and resources over the coming years.
Panel on neural relational data: Tabular foundation models, LLMs... or both?
VLDB 2025, 51st International Conference on Very Large Data Bases, 1-5 September 2025, London, United Kingdom. Also published in Proceedings of the VLDB Endowment, Vol. 18, No. 12, 16 September 2025.
Type: Conference
City: London
Date: 2025-09-01
Department: Data Science
Eurecom Ref: 8365
Copyright: © ACM, 2025. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in VLDB 2025, 51st International Conference on Very Large Data Bases, 1-5 September 2025, London, United Kingdom, and in Proceedings of the VLDB Endowment, Vol. 18, No. 12, 16 September 2025. https://doi.org/10.14778/3750601.3760519
See also: https://www.eurecom.fr/publication/8365 (permalink)