In this paper, we propose a novel approach to conformal prediction for generative language models (LMs). Standard conformal prediction produces prediction sets—in place of single predictions—that have rigorous, statistical performance guarantees. LM responses are typically sampled from the model’s predicted distribution over the large, combinatorial output space of natural language. Translating this process to conformal prediction, we calibrate a stopping rule for sampling different outputs from the LM that get added to a growing set of candidates until we are confident that the output set is sufficient. Since some samples may be low quality, we also simultaneously calibrate and apply a rejection rule for removing candidates from the output set to reduce noise. Similar to conformal prediction, we prove that the sampled set returned by our procedure contains at least one acceptable answer with high probability, while still being empirically precise (i.e., small) on average. Furthermore, within this set of candidate responses, we show that we can also accurately identify subsets of individual components—such as phrases or sentences—that are each independently correct (e.g., that are not “hallucinations”), again with statistical guarantees. We demonstrate the promise of our approach on multiple tasks in open-domain question answering, text summarization, and radiology report generation using different LM variants.
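The sample-then-filter loop described above can be illustrated with a minimal sketch. All names here (`sample_fn`, `quality_fn`, `set_score_fn`, the `lambda_*` thresholds) are hypothetical placeholders: in the actual method, the thresholds would be calibrated on held-out data to certify the coverage guarantee, and the calibration step is not shown.

```python
def conformal_sampling(sample_fn, quality_fn, set_score_fn,
                       lambda_reject, lambda_stop, k_max=20):
    """Hedged sketch of the calibrated sample/reject/stop procedure.

    - `sample_fn()` draws one candidate response from the LM.
    - `quality_fn(y)` scores a single candidate; candidates scoring
      below `lambda_reject` are rejected to reduce noise in the set.
    - `set_score_fn(candidates)` measures confidence that the current
      set is sufficient; sampling stops once it reaches `lambda_stop`.
    - `k_max` bounds the number of samples drawn.
    """
    candidates = []
    for _ in range(k_max):
        y = sample_fn()
        if quality_fn(y) >= lambda_reject:   # rejection rule
            candidates.append(y)
        if set_score_fn(candidates) >= lambda_stop:  # stopping rule
            break  # confident the output set is sufficient
    return candidates
```

With calibrated thresholds, the returned set would contain at least one acceptable answer with high probability; here the functions are left abstract, so the sketch only conveys the control flow.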
Contributors: Victor Quach, Adam Fisch, Tal Schuster, Jae Ho Sohn