hallucination
Chain-of-Verification Reduces Hallucination in Large Language Models • arXiv:2309.11495
Hallucination Detox: Sensitive Neuron Dropout (SeND) for Large Language Model Training • arXiv:2410.15460
DeCoRe: Decoding by Contrasting Retrieval Heads to Mitigate Hallucinations • arXiv:2410.18860
Do I Know This Entity? Knowledge Awareness and Hallucinations in Language Models • arXiv:2411.14257
Linear Correlation in LM's Compositional Generalization and Hallucination • arXiv:2502.04520
The Hidden Life of Tokens: Reducing Hallucination of Large Vision-Language Models via Visual Information Steering • arXiv:2502.03628
When an LLM is apprehensive about its answers -- and when its uncertainty is justified • arXiv:2503.01688
How to Steer LLM Latents for Hallucination Detection? • arXiv:2503.01917
LettuceDetect: A Hallucination Detection Framework for RAG Applications • arXiv:2502.17125
Are Reasoning Models More Prone to Hallucination? • arXiv:2505.23646
Why Language Models Hallucinate • arXiv:2509.04664
Large Language Models Do NOT Really Know What They Don't Know • arXiv:2510.09033