Think2SQL: Blueprinting reward density and advantage scaling for effective text-to-SQL reasoning

Papicchio, Simone; Rossi, Simone; Cagliero, Luca; Papotti, Paolo
Submitted to Transactions on Machine Learning Research (TMLR), February 2026

While Large Language Models (LLMs) have advanced the state of the art in Text-to-SQL, robust reasoning in complex, multi-table environments remains a bottleneck for parameter-efficient models. This paper presents a systematic empirical study on injecting reasoning capabilities into Text-to-SQL through the lens of Reinforcement Learning with Verifiable Rewards (RLVR). We uncover a critical interplay between reward density, advantage scaling, and model capacity. Our analysis yields four primary insights. First, we propose a novel execution-guided dense reward function that significantly outperforms binary signals and existing state-of-the-art rewards by providing granular feedback at the instance level. Second, we analyze the mechanics of advantage calculation, demonstrating that while large models thrive on sparse signals with aggressive advantage scaling, smaller models require dense rewards and conservative scaling to improve Text-to-SQL performance. Third, we evaluate the impact of cold start, showing that distillation does not always benefit RLVR performance and that supervised fine-tuned models are prone to distributional mimicry. Fourth, we map the Pareto frontier of training efficiency, providing insights for optimizing Text-to-SQL reasoning under computational constraints. Our findings culminate in the Think2SQL family: our 4B-parameter model demonstrates reasoning capabilities competitive with state-of-the-art models such as o3. We release our models, datasets, and code to create a blueprint for RLVR optimization in Text-to-SQL at https://anonymous.4open.science/r/Think2SQL-3B7F.
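The abstract contrasts sparse binary rewards with dense execution-guided rewards. As an illustrative sketch only (not the paper's exact formulation), the distinction can be shown by comparing a binary exact-match reward with a graded reward computed as the set-overlap F1 between the rows returned by the predicted and gold queries; the row-tuple inputs and the F1 choice here are assumptions for illustration.

```python
# Illustrative sketch (not the paper's exact reward): sparse binary reward
# versus a dense execution-guided reward for Text-to-SQL. Both score the
# row sets obtained by executing the predicted and gold SQL queries.

def binary_reward(pred_rows, gold_rows):
    """1.0 only on an exact result-set match, else 0.0 (sparse signal)."""
    return 1.0 if set(pred_rows) == set(gold_rows) else 0.0

def dense_reward(pred_rows, gold_rows):
    """F1 overlap between predicted and gold result rows (graded signal)."""
    pred, gold = set(pred_rows), set(gold_rows)
    if not pred or not gold:
        return 1.0 if pred == gold else 0.0
    overlap = len(pred & gold)
    precision = overlap / len(pred)
    recall = overlap / len(gold)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# A nearly correct query that misses one row earns partial credit under
# the dense reward but zero under the binary one:
gold = [(1, "a"), (2, "b"), (3, "c")]
pred = [(1, "a"), (2, "b")]
print(binary_reward(pred, gold))  # 0.0
print(dense_reward(pred, gold))   # 0.8
```

Under this kind of graded signal, near-miss queries still contribute a learning gradient at the instance level, which is the property the abstract credits for helping smaller models.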

HAL
Type: Journal
Date: 2026-02-23
Department: Data Science
Eurecom Ref: 8640
Copyright: © EURECOM. Personal use of this material is permitted. The definitive version of this paper was submitted to Transactions on Machine Learning Research (TMLR), February 2026, and is available at:

PERMALINK : https://www.eurecom.fr/publication/8640