This model is a fine-tuning of distilbert-base-uncased, trained on a small set of manually labeled sentences classified as either "request" or "question". Its main purpose is to calculate metrics used by the SCBN-RQTL chatbot response evaluation benchmark. More information is available in the project's GitHub repository.
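A minimal usage sketch with the transformers `pipeline` API. This assumes the model id `reddgr/rq-request-question-prompt-classifier` and that the model config maps outputs to the labels "request" and "question" described above; the example prompt is illustrative.

```python
def load_classifier():
    """Load the classifier as a text-classification pipeline.

    Assumes the transformers library is installed; downloads the model
    from the Hugging Face Hub on first use.
    """
    from transformers import pipeline
    return pipeline(
        "text-classification",
        model="reddgr/rq-request-question-prompt-classifier",
    )

def classify_prompt(text, clf):
    """Return the top label for a prompt.

    `clf` is any callable with the pipeline's interface: it takes a string
    and returns a list like [{'label': 'question', 'score': 0.97}].
    """
    result = clf(text)[0]
    return result["label"]

if __name__ == "__main__":
    clf = load_classifier()
    print(classify_prompt("What time is it?", clf))
```

`classify_prompt` is kept separate from model loading so the label-extraction logic can be exercised without downloading weights.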

Model size: 67M params (F32, Safetensors)

Model: reddgr/rq-request-question-prompt-classifier (fine-tuned from distilbert-base-uncased)