We explore uncertainty quantification in large language models (LLMs), with the goal of identifying when uncertainty in responses given a query is large.
Hacker News 1:15 pm on June 10, 2024
- The paper explores uncertainty quantification in large language models (LLMs) to identify when a model's output is unreliable, using an information-theoretic metric. It distinguishes epistemic uncertainty (lack of knowledge about the ground truth) from aleatoric uncertainty (irreducible randomness, such as multiple valid answers) and detects hallucinations via an iterative prompting procedure (a rough sketch follows this list).
- Authors: Yasin Abbasi Yadkori et al.
- Publication type: arXiv preprint
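To make the idea concrete, here is a minimal sketch of an iterative-prompting hallucination check in Python. It is an illustration under stated assumptions, not the paper's exact metric: `llm` stands for any callable mapping a prompt string to an answer string, and `answer_dist`, `epistemic_score`, and the smoothing constant `eps` are hypothetical names introduced here. The score is a crude empirical mutual-information estimate: if conditioning the model on its own previous answers shifts its answer distribution, its knowledge of the ground truth is unstable, which the paper associates with high epistemic uncertainty.

```python
import math
from collections import Counter

def answer_dist(llm, query, prior_answers=(), n=16, eps=1e-3):
    """Empirical answer distribution for a query. Any prior answers are
    appended to the prompt, mimicking the paper's iterative prompting."""
    prompt = query + "".join(
        f"\nOne possible answer to this question: {a}" for a in prior_answers
    )
    counts = Counter(llm(prompt) for _ in range(n))
    total = n + eps * len(counts)  # additive smoothing so logs stay finite
    return {a: (c + eps) / total for a, c in counts.items()}, counts

def epistemic_score(llm, query, n=16, repeats=4):
    """Crude mutual-information estimate between a first sampled answer
    and answers sampled after that answer is fed back into the prompt.
    A large score suggests high epistemic uncertainty (possible hallucination)."""
    p_base, counts = answer_dist(llm, query, n=n)
    mi = 0.0
    for y1, c in counts.items():
        # Re-sample with y1 repeated in the prompt (iterative prompting).
        p_cond, _ = answer_dist(llm, query, prior_answers=[y1] * repeats, n=n)
        for y2, q in p_cond.items():
            # Accumulate p(y1) * p(y2|y1) * log(p(y2|y1) / p(y2)).
            mi += (c / n) * q * math.log(q / p_base.get(y2, 1e-3))
    return mi
```

With a real model this might be called as, e.g., `epistemic_score(lambda p: model.generate(p, temperature=1.0), "Who wrote Hamlet?")`; thresholding the score on held-out queries would then flag unreliable outputs, which is the role the paper's information-theoretic metric plays.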
Categories: Artificial Intelligence, Search
https://arxiv.org/abs/2406.02543