Retrieval Augmented Generation faces a trade-off: concatenating documents into a long prompt enables multi-document reasoning but creates a prefill bottleneck, while encoding document KV caches separately offers speed but breaks cross-document interaction. We propose Parallel Context-of-Experts Decoding (Pced), a training-free framework that shifts evidence aggregation from the attention mechanism to the decoding stage. Pced treats retrieved documents as isolated "experts" and synchronizes their predictions via a novel retrieval-aware contrastive decoding rule that weighs expert logits against the model prior. This approach recovers cross-document reasoning capabilities without constructing shared attention across documents.
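A minimal sketch of the idea described in the abstract: per-document "expert" logits are contrasted against the model's no-context prior and fused at decoding time rather than through shared attention. The exact Pced rule is not specified here; the retrieval-score weighting and the contrast strength `alpha` are illustrative assumptions, not the authors' formula.

```python
import numpy as np

def contrastive_expert_decode(expert_logits, prior_logits, retrieval_scores, alpha=1.0):
    """Aggregate next-token logits from independently encoded documents.

    expert_logits    : (num_docs, vocab) logits, one row per retrieved document
    prior_logits     : (vocab,) logits from the model with no retrieved context
    retrieval_scores : (num_docs,) relevance scores from the retriever
    alpha            : strength of the contrast against the prior (assumed)
    """
    # Weight each document expert by its softmaxed retrieval score.
    weights = np.exp(retrieval_scores - retrieval_scores.max())
    weights /= weights.sum()

    # Contrast each expert against the context-free prior, boosting tokens
    # that are likely only because of the retrieved evidence.
    contrast = expert_logits - alpha * prior_logits[None, :]

    # Fuse the experts at the logit level: evidence aggregation happens in
    # decoding, not in a shared attention pass over all documents.
    return (weights[:, None] * contrast).sum(axis=0)

# Toy usage: 3 retrieved documents, vocabulary of 5 tokens.
rng = np.random.default_rng(0)
combined = contrastive_expert_decode(
    expert_logits=rng.normal(size=(3, 5)),
    prior_logits=rng.normal(size=5),
    retrieval_scores=np.array([2.0, 0.5, 0.1]),
)
next_token = int(np.argmax(combined))
```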
Parallel context-of-experts decoding for retrieval augmented generation
Submitted to arXiv, 13 January 2026
Type: Report
Date: 2026-01-13
Department: Data Science
Eurecom Ref: 8572
Copyright: © EURECOM. Personal use of this material is permitted. The definitive version of this paper was published in Submitted to arXiv, 13 January 2026 and is available at:
See also:
PERMALINK: https://www.eurecom.fr/publication/8572