# Fine-Tuned Mistral-7B CEFR Model

This is a fine-tuned version of `unsloth/mistral-7b-instruct-v0.3-bnb-4bit` for CEFR-level sentence generation, evaluated with a fine-tuned classifier from `Mr-FineTuner/Skripsi_validator_best_model`.
- **Base Model**: `unsloth/mistral-7b-instruct-v0.3-bnb-4bit`
- **Fine-Tuning**: LoRA with a SMOTE-balanced dataset
- **Training Details**:
  - Dataset: CEFR-level sentences, balanced with SMOTE and undersampling
  - LoRA Parameters: r=32, lora_alpha=32, lora_dropout=0.5
  - Training Args: learning_rate=2e-5, batch_size=8, epochs=0.01, cosine scheduler
  - Optimizer: adamw_8bit
  - Early Stopping: patience=3, threshold=0.01
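As an illustrative sketch only, the settings listed above map onto a `peft`/`transformers` configuration along these lines. Anything not stated in the card (e.g. `target_modules`, `output_dir`, evaluation strategy) is an assumption, not the actual training setup:

```python
from peft import LoraConfig
from transformers import TrainingArguments, EarlyStoppingCallback

# LoRA settings as listed above; target modules are an assumption.
lora_config = LoraConfig(
    r=32,
    lora_alpha=32,
    lora_dropout=0.5,
    bias="none",
    task_type="CAUSAL_LM",
)

# Trainer settings as listed above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="outputs",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    num_train_epochs=0.01,  # as listed: a very short run
    lr_scheduler_type="cosine",
    optim="adamw_8bit",
    load_best_model_at_end=True,  # required for early stopping
)

# Early stopping with the listed patience and threshold.
callbacks = [EarlyStoppingCallback(early_stopping_patience=3,
                                   early_stopping_threshold=0.01)]
```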
- **Evaluation Metrics (Exact Matches)**:
  - CEFR Classifier Accuracy: 0.000
  - Precision (Macro): 0.000
  - Recall (Macro): 0.000
  - F1-Score (Macro): 0.000
- **Evaluation Metrics (Within ±1 Level)**:
  - CEFR Classifier Accuracy: 0.500
  - Precision (Macro): 0.333
  - Recall (Macro): 0.500
  - F1-Score (Macro): 0.389
- **Other Metrics**:
  - Perplexity: 6.089
  - Diversity (Unique Sentences): 0.100
  - Inference Time (ms): 5150.096
  - Model Size (GB): 4.1
  - Robustness (F1): 0.000
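The "within ±1 level" metric treats CEFR labels as an ordinal scale: a prediction counts as correct if it lands at most one level away from the target. A minimal sketch of that check (the level ordering is the standard CEFR scale; the sample labels below are made up for illustration):

```python
LEVELS = ["A1", "A2", "B1", "B2", "C1", "C2"]
IDX = {lvl: i for i, lvl in enumerate(LEVELS)}

def exact_accuracy(y_true, y_pred):
    # Strict match: predicted level must equal the true level.
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def within1_accuracy(y_true, y_pred):
    # Tolerant match: predicted level at most one step from the true level.
    return sum(abs(IDX[t] - IDX[p]) <= 1 for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical labels, for illustration only.
true_lvls = ["B1", "B2", "A2", "C1"]
pred_lvls = ["B2", "B2", "C1", "C2"]
print(exact_accuracy(true_lvls, pred_lvls))    # 0.25
print(within1_accuracy(true_lvls, pred_lvls))  # 0.75
```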
- **Confusion Matrix (Exact Matches)**:
  - CSV: [confusion_matrix_exact.csv](confusion_matrix_exact.csv)
  - Image: [confusion_matrix_exact.png](confusion_matrix_exact.png)
- **Confusion Matrix (Within ±1 Level)**:
  - CSV: [confusion_matrix_within1.csv](confusion_matrix_within1.csv)
  - Image: [confusion_matrix_within1.png](confusion_matrix_within1.png)
- **Per-Class Confusion Metrics (Exact Matches)**:
  - A1: TP=0, FP=0, FN=10, TN=50
  - A2: TP=0, FP=0, FN=10, TN=50
  - B1: TP=0, FP=10, FN=10, TN=40
  - B2: TP=0, FP=0, FN=10, TN=50
  - C1: TP=0, FP=30, FN=10, TN=20
  - C2: TP=0, FP=20, FN=10, TN=30
- **Per-Class Confusion Metrics (Within ±1 Level)**:
  - A1: TP=0, FP=0, FN=10, TN=50
  - A2: TP=0, FP=0, FN=10, TN=50
  - B1: TP=0, FP=10, FN=10, TN=40
  - B2: TP=10, FP=0, FN=0, TN=50
  - C1: TP=10, FP=10, FN=0, TN=40
  - C2: TP=10, FP=10, FN=0, TN=40
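The within-±1 macro scores reported earlier follow directly from these per-class counts (with TP+FN=10 per class across six classes, i.e. 60 evaluation sentences in total). A quick pure-Python cross-check, with counts copied from the list above:

```python
# (TP, FP, FN) per class, copied from the within-±1 list above.
counts = {
    "A1": (0, 0, 10), "A2": (0, 0, 10), "B1": (0, 10, 10),
    "B2": (10, 0, 0), "C1": (10, 10, 0), "C2": (10, 10, 0),
}

def prf(tp, fp, fn):
    # Precision, recall, and F1 with the usual 0-denominator convention.
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

per_class = [prf(*c) for c in counts.values()]
macro_p = sum(p for p, _, _ in per_class) / len(per_class)
macro_r = sum(r for _, r, _ in per_class) / len(per_class)
macro_f1 = sum(f for _, _, f in per_class) / len(per_class)
accuracy = sum(tp for tp, _, _ in counts.values()) / 60

print(round(macro_p, 3), round(macro_r, 3), round(macro_f1, 3), accuracy)
# 0.333 0.5 0.389 0.5
```

These reproduce the reported within-±1 figures (precision 0.333, recall 0.500, F1 0.389, accuracy 0.500).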
- **Usage**:
  ```python
  from transformers import AutoModelForCausalLM, AutoTokenizer

  model = AutoModelForCausalLM.from_pretrained("Mr-FineTuner/Test_01_withNewEval_andWithin-1_mistral_skripsi_classifier")
  tokenizer = AutoTokenizer.from_pretrained("Mr-FineTuner/Test_01_withNewEval_andWithin-1_mistral_skripsi_classifier")

  # Example inference: build the prompt with the tokenizer's chat template
  # (Mistral-Instruct uses [INST] ... [/INST] formatting, not <|user|> tags).
  messages = [{"role": "user", "content": "Generate a CEFR B1 level sentence."}]
  inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
  outputs = model.generate(inputs, max_new_tokens=50)
  print(tokenizer.decode(outputs[0], skip_special_tokens=True))
  ```

Uploaded using `huggingface_hub`.