ReTaT: A unified benchmark for relation extraction across text and table

Ettaleb, Mohamed; Ehrhart, Thibault; Aussenac-Gilles, Nathalie; Chabot, Yoan; Kamel, Mouna; Moriceau, Véronique; Troncy, Raphaël; Wei, Fanfu
LREC 2026, International Conference on Language Resources and Evaluation, 11-16 May 2026, Palma, Mallorca, Spain

While prior work in Information Extraction (IE) has focused on extracting information from either textual content or tables in isolation, it misses critical information that emerges only from their interplay. Indeed, tables may summarize facts that are sparse in the text, while text can disambiguate or elaborate on table entries. This complementarity may take the form of relations that are expressed across text and tables. In this context, we are interested in extracting relations whose expression spans the two modalities. We propose this original task, for which no reference evaluation corpus exists. We therefore created ReTaT, a dataset for training and evaluating systems that extract such relations. The dataset is composed of (table, surrounding text) pairs extracted from Wikipedia pages and has been manually annotated with relation triples. ReTaT is organized into three subsets with distinct characteristics: domain (business, telecommunication, and female celebrities), size (from 50 to 255 pairs), language (English vs French), type of relations (data vs object properties), closed vs open list of relations, and size of the surrounding text (paragraph vs full page). We then assessed its quality and suitability for the joint table-text relation extraction task using Large Language Models (LLMs), at a time when LLMs have demonstrated their ability to extract relations from either text or tables in isolation.

Type:
Conference
City:
Palma
Date:
2026-05-11
Department:
Data Science
Eurecom Ref:
8703

PERMALINK: https://www.eurecom.fr/publication/8703