Paper: LoRA: Low-Rank Adaptation of Large Language Models (arXiv:2106.09685)
This model is a fine-tuned version of google/gemma-7b, trained on the English quotes dataset using LoRA. It is based on the example provided by Google here. The notebook used to fine-tune the model can be found here.
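Since the adapter was trained with LoRA, the setup likely resembled the sketch below. This is only an illustration: the rank, alpha, and target modules shown here are assumptions, and the actual configuration lives in the linked notebook.
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Hypothetical LoRA configuration -- the exact values used for this model
# are not documented here; see the linked notebook for the real ones.
lora_config = LoraConfig(
    r=8,                  # rank of the low-rank update matrices
    lora_alpha=32,        # scaling factor applied to the update
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    task_type="CAUSAL_LM",
)

base_model = AutoModelForCausalLM.from_pretrained("google/gemma-7b", torch_dtype=torch.bfloat16)
peft_model = get_peft_model(base_model, lora_config)
peft_model.print_trainable_parameters()  # only the small adapter matrices are trainable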
The model can complete popular quotes and append the author. For example, given the quote below:
Quote: With great power comes
The model completes the quote and adds the author:
Quote: With great power comes great responsibility. Author: Ben Parker.
Given a complete quote, the model adds the author:
Quote: I'll be back. Author: Arnold Schwarzenegger.
The model can be used with the transformers library. Here's an example of loading the model in 4-bit quantization:
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
model_id = "Eteims/gemma_ft_quote"
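# 4-bit NF4 quantization with bfloat16 compute, via bitsandbytes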
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16
)
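# Load the tokenizer and the quantized model onto the first GPU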
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, device_map="cuda:0")
This code runs comfortably on the free Colab tier.
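If your GPU has enough memory for the full bfloat16 weights (roughly 16 GB for a 7B model), you can also skip quantization. This variant is a sketch, not part of the original example, and device_map="auto" requires the accelerate package:
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # full bfloat16 weights, no 4-bit quantization
    device_map="auto",           # let accelerate place weights across available devices
)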
After loading the model, you can use it for inference:
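# A complete quote; the model should append the author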
text = "Quote: Elementary, my dear Watson."
device = "cuda:0"
inputs = tokenizer(text, return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
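Note that generate returns the prompt tokens together with the completion. To print only the newly generated text, you can slice off the prompt first (a small convenience, not part of the original example):
prompt_len = inputs["input_ids"].shape[1]  # number of prompt tokens
print(tokenizer.decode(outputs[0][prompt_len:], skip_special_tokens=True))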
The following hyperparameters were used during fine-tuning:
Base model: google/gemma-7b