igorktech committed on
Commit bddf001 · verified · 1 Parent(s): 9c5c370

Update README.md

Files changed (1): README.md (+44 −3)
README.md CHANGED
@@ -12,13 +12,54 @@ model-index:
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# RuBit-Llama-56M2
 
-This model is a fine-tuned version of [NousResearch/Llama-2-7b-hf](https://huggingface.co/NousResearch/Llama-2-7b-hf) on the darulm dataset.
 
 ## Model description
 
-More information needed
 
 ## Intended uses & limitations
 
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
+# RuBit-Llama-63M
+
+This model is a fine-tuned version of [NousResearch/Llama-2-7b-hf](https://huggingface.co/NousResearch/Llama-2-7b-hf) on the darulm dataset.
+The aphorisms, dramaturgy, history, humor, and literature domains were sampled from darulm.
+
+Trained on 2,125,871,104 tokens.
+
+Inspired by [abideen/Bitnet-Llama-70M](https://huggingface.co/abideen/Bitnet-Llama-70M).
 
 ## Model description
 
+## Sample inference code
+
+```python
+import torch.nn as nn
+from transformers import AutoModelForCausalLM, AutoTokenizer
+from transformers.models.llama.modeling_llama import (
+    LlamaDecoderLayer,
+    LlamaMLP,
+    LlamaRMSNorm,
+    LlamaSdpaAttention,
+)
+# BitLinear is the 1.58-bit linear layer used during training; it is not part
+# of transformers and must be defined or imported separately.
+
+# Load the pretrained BitNet model
+model_id = "igorktech/RuBit-LLama-63M"
+tokenizer = AutoTokenizer.from_pretrained(model_id)
+model = AutoModelForCausalLM.from_pretrained(model_id)
+
+
+def convert_to_bitnet(model, copy_weights):
+    for name, module in model.named_modules():
+        # Replace the attention and MLP linear layers with BitLinear
+        if isinstance(module, (LlamaSdpaAttention, LlamaMLP)):
+            for child_name, child_module in module.named_children():
+                if isinstance(child_module, nn.Linear):
+                    bitlinear = BitLinear(
+                        child_module.in_features,
+                        child_module.out_features,
+                        child_module.bias is not None,
+                    ).to(device="cuda:0")
+                    if copy_weights:
+                        bitlinear.weight = child_module.weight
+                        if child_module.bias is not None:
+                            bitlinear.bias = child_module.bias
+                    setattr(module, child_name, bitlinear)
+        # Remove redundant input_layernorms (BitLinear normalizes its own input)
+        elif isinstance(module, LlamaDecoderLayer):
+            for child_name, child_module in module.named_children():
+                if isinstance(child_module, LlamaRMSNorm) and child_name == "input_layernorm":
+                    setattr(module, child_name, nn.Identity().to(device="cuda:0"))
+
+
+convert_to_bitnet(model, copy_weights=True)
+model.to(device="cuda:0")
+
+prompt = "Привет"
+inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+generate_ids = model.generate(inputs.input_ids, max_length=100)
+print(tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0])
+```
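The snippet above calls a `BitLinear` layer that this card does not define. A minimal sketch of such a layer, loosely following the reference implementation published with the BitNet b1.58 paper (absmean ternary weight quantization, per-token 8-bit absmax activation quantization, straight-through estimator), might look like the following — the exact layer used to train this model may differ, and the helper names `activation_quant`/`weight_quant` and the epsilon values are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def activation_quant(x: torch.Tensor) -> torch.Tensor:
    # Per-token absmax quantization of activations to 8 bits
    scale = 127.0 / x.abs().max(dim=-1, keepdim=True).values.clamp(min=1e-5)
    return (x * scale).round().clamp(-128, 127) / scale


def weight_quant(w: torch.Tensor) -> torch.Tensor:
    # Absmean quantization of weights to the ternary set {-1, 0, 1}
    scale = 1.0 / w.abs().mean().clamp(min=1e-5)
    return (w * scale).round().clamp(-1, 1) / scale


class BitLinear(nn.Linear):
    """Drop-in nn.Linear that fake-quantizes weights and activations."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # RMS-normalize the input (this is why input_layernorm becomes redundant)
        x = x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + 1e-6)
        # Straight-through estimator: quantized values in the forward pass,
        # full-precision gradients in the backward pass
        x_q = x + (activation_quant(x) - x).detach()
        w_q = self.weight + (weight_quant(self.weight) - self.weight).detach()
        return F.linear(x_q, w_q, self.bias)
```

Because quantization here is "fake" (values are dequantized back to float right away), the converted model runs on ordinary hardware; the memory and speed benefits of 1.58-bit weights require a dedicated ternary kernel.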
 
 ## Intended uses & limitations