kgreenewald committed
Commit 35d0b55 · verified · 1 Parent(s): f480202

Update uncertainty/README.md

Files changed (1)
  1. uncertainty/README.md +11 -4
uncertainty/README.md CHANGED
@@ -48,7 +48,7 @@ it has been evaluated on in training - this is an inherently inexact process and
 **Possible downstream use cases**
 * Human usage: Certainty scores give human users an indication of when to trust answers from the model (which should be augmented by their own knowledge).
 * Model routing/guards: If the model has low certainty (below a chosen threshold), it may be worth sending the request to a larger, more capable model or simply choosing not to show the response to the user (a minimal routing sketch follows the diff below).
-* RAG: **Granite Uncertainty 3.3 8b** is calibrated on document-based question answering datasets, hence it can be applied to giving certainty scores for answers created using RAG. This certainty will be a prediction of overall correctness based on both the documents given and the model's own knowledge (e.g. if the model is correct but the answer is not in the documents, the certainty can still be high).
+* RAG: These models are calibrated on document-based question answering datasets, hence they can be applied to give certainty scores for answers created using RAG. This certainty will be a prediction of overall correctness based on both the documents given and the model's own knowledge (e.g. if the model is correct but the answer is not in the documents, the certainty can still be high).
 
 **Important note** Certainty is inherently an intrinsic property of a model and its abilities. These models are not intended to predict the certainty of responses generated by any other models besides their corresponding base model.
 Additionally, certainty scores are *distributional* quantities, and so will do well on realistic questions in aggregate, but in principle may have surprising scores on individual
@@ -78,10 +78,17 @@ Scenario 2. Predicting the certainty score from the question only, *prior* to ge
 ## Training and Evaluation
 These models are adapters tuned to provide certainty scores mimicking the output of a calibrator trained via the method in [[Shen et al. ICML 2024] Thermometer: Towards Universal Calibration for Large Language Models](https://arxiv.org/abs/2403.08819).
 
-Evaluation: The mean absolute error for the adapters in predicting the output of the calibrator are as follows.
+Evaluation: The mean absolute errors (MAE) for the adapters in predicting the output of the calibrator are as follows. Here, "X% MAE" means an error of X percentage points in the output's units of % chance (not a relative error). Recall that the output is quantized in steps of 10%, so an MAE below 10% is smaller than a single quantization level.
 
-**Lora adapters**
-*
+**aLoRA adapters**
+* Granite 3.3 2B: 4.45% MAE
+* Granite 3.3 8B: 3.45% MAE
+* GPT-OSS 20B: 0.75% MAE
+
+**LoRA adapters**
+* Granite 3.3 2B: 5.25% MAE
+* Granite 3.3 8B: 5.10% MAE
+* GPT-OSS 20B: 1.35% MAE
 
 ### Training Data
 The following datasets were used for calibration and/or finetuning.
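The routing/guards bullet in the first hunk describes a threshold policy in prose. Below is a minimal sketch of that policy, assuming a hypothetical `get_certainty` helper that returns the adapter's score in [0, 1] and a `fallback_model` callable; the names and the 0.7 threshold are illustrative, not part of the model card.

```python
# Minimal sketch of certainty-based routing/guarding. get_certainty,
# fallback_model, and the 0.7 threshold are illustrative assumptions,
# not APIs defined by the model card.

CERTAINTY_THRESHOLD = 0.7  # tune per application and risk tolerance

def route_response(question, answer, get_certainty, fallback_model):
    """Serve the small model's answer only when its certainty is high enough."""
    certainty = get_certainty(question, answer)  # adapter score in [0.0, 1.0]
    if certainty >= CERTAINTY_THRESHOLD:
        return answer
    # Low certainty: escalate to a larger, more capable model,
    # or decline to show the response at all.
    return fallback_model(question)
```

For a pure guard (no fallback model), the low-certainty branch would simply return a refusal instead of calling another model.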
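For background on the calibrator referenced in the second hunk: the Thermometer method of Shen et al. learns to predict a temperature for scaling a model's logits on unseen tasks. The snippet below sketches only the final temperature-scaling step, with made-up logits and temperature; how the temperature is predicted is the subject of the paper and is not shown here.

```python
import numpy as np

def temperature_scale(logits, temperature):
    """Compute softmax(logits / T). T > 1 softens the distribution
    (lower confidence); T < 1 sharpens it (higher confidence)."""
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()  # subtract the max for numerical stability
    probs = np.exp(scaled)
    return probs / probs.sum()

# Illustrative values only; in Thermometer the temperature is predicted.
logits = [2.0, 1.0, 0.1]
print(temperature_scale(logits, 1.0))  # raw softmax confidences
print(temperature_scale(logits, 1.5))  # softened by a T > 1
```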
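The MAE figures above are absolute errors in the output's % chance units, measured against the calibrator's scores, with the adapters' scores quantized to 10% steps. A small worked sketch of that computation, using made-up scores rather than the reported evaluation data:

```python
import numpy as np

def quantize_10pct(scores):
    """Round certainty scores in [0, 1] to the nearest 10% step,
    matching the adapters' quantized output."""
    return np.round(np.asarray(scores, dtype=float) * 10.0) / 10.0

def mae_percent(predicted, target):
    """Mean absolute error expressed in percentage points of % chance."""
    predicted = np.asarray(predicted, dtype=float)
    target = np.asarray(target, dtype=float)
    return float(np.mean(np.abs(predicted - target)) * 100.0)

# Made-up scores for illustration only (not the reported results).
calibrator_scores = [0.62, 0.91, 0.45, 0.78]
adapter_scores = quantize_10pct([0.58, 0.93, 0.47, 0.81])  # 0.6, 0.9, 0.5, 0.8

print(f"{mae_percent(adapter_scores, calibrator_scores):.2f}% MAE")
# Per-example errors: 0.02, 0.01, 0.05, 0.02 -> mean 0.025 -> 2.50% MAE,
# i.e. well below one 10% quantization step.
```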