---
license: bsd-2-clause
datasets:
- hodza/Informatika21CP
language:
- ru
- en
base_model:
- Qwen/Qwen2.5-Coder-3B-Instruct
tags:
- code
- programming
- blackbox
- componentpascal
---

# BlackBox Component Pascal Assistant Model

![Model Logo](https://huggingface.co/front/assets/huggingface_logo-noborder.svg) <!-- Optional logo -->

## Model Description

This is a specialized AI assistant for programming in **BlackBox Component Builder** using Component Pascal. The model is fine-tuned from Qwen/Qwen2.5-Coder-3B-Instruct to provide context-aware coding assistance and best practices for BlackBox development.

**Key Features:**
- Component Pascal syntax support
- BlackBox framework-specific patterns
- Code generation and troubleshooting
- Interactive programming guidance

## Intended Use

✅ Intended for:
- BlackBox Component Builder developers
- Component Pascal learners
- Legacy Oberon-2 system maintainers
- Educational purposes

🚫 Not intended for:
- General programming outside BlackBox
- Non-technical decision making
- Mission-critical systems without human verification

## How to Use

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "hodza/BlackBox-Coder-3B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

def get_assistant_response(prompt):
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        inputs.input_ids,
        max_new_tokens=256,
        do_sample=True,  # required for temperature/top_p to take effect
        temperature=0.7,
        top_p=0.9,
        pad_token_id=tokenizer.eos_token_id
    )
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```
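Since the base model is an instruct variant, it expects chat-formatted prompts rather than raw strings. In practice, `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)` builds this for you; the sketch below only makes the ChatML layout explicit, assuming the fine-tune kept the base model's chat template (the helper name and example messages are hypothetical):

```python
def build_chatml_prompt(messages):
    # Qwen2.5 chat templates use the ChatML layout:
    # <|im_start|>role\ncontent<|im_end|> per message,
    # then an opening assistant turn to cue generation.
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

# Hypothetical usage, paired with get_assistant_response above:
prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a BlackBox Component Pascal assistant."},
    {"role": "user", "content": "Write a module that prints 'Hello' to StdLog."},
])
# response = get_assistant_response(prompt)
```

Prefer `apply_chat_template` in real code; it stays correct even if the template changes between model revisions.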