Dr. Nicefellow
DrNicefellow
AI & ML interests
LLMs and AGI. Sometimes I play with diffusion models.
Recent Activity
new activity • 3 days ago
WestlakeNLP/DeepReviewer-14B: Could you provide a GGUF version?
published a model • about 1 month ago
DrNicefellow/sudoku-small-10k
updated a model • 2 months ago
DrNicefellow/uncensored_gpt_oss
Collections
Qwen-QwQ-32B-Preview-abliterated-exl2
Qwen-QwQ-32B-Preview-exl2
Qwen2.5-Coder-14B-Instruct-exl2
Qwen2.5-32B-Instruct-exl2
Qwen2.5-Coder-7B-Instruct
Qwen2.5-7B-Instruct-exl2
ChatAllInOne
- DrNicefellow/CHAT-ALL-IN-ONE-v1 (Viewer • Updated • 1.24M • 283 • 6)
- DrNicefellow/ChatAllInOne-Yi-34B-200K-V1 (Text Generation • 34B • Updated • 71 • 8)
- DrNicefellow/ChatAllInOne-Mistral-7B-V1 (Text Generation • 7B • Updated • 30 • 1)
- DrNicefellow/ChatAllInOne_Mixtral-8x7B-v1 (Text Generation • Updated • 64)
Trimmed-Mixtral-instruct
Microscopic-Mistral
Microscopic-Mamba-2.1B
- DrNicefellow/microscopic-mamba-2.1B-hf-1.0ksteps (Text Generation • Updated • 15)
- DrNicefellow/microscopic-mamba-2.1B-hf-7.8ksteps (Text Generation • Updated • 10)
- DrNicefellow/microscopic-mamba-2.1B-hf-4.9ksteps (Text Generation • Updated • 7)
- DrNicefellow/microscopic-mamba-2.1B-hf-13.4ksteps (Text Generation • Updated • 8)
Qwen-1.5-Exl2
WorthLooking
Datasets-For-Finetuning
GPT-2-Large-From-Scratch
Qwen2.5-7B-O1-Journey-1-exl2
Qwen2.5-Coder-32B-Instruct-exl2
Qwen2.5-14B-Instruct-exl2
Qwen2.5-Math-7B-Instruct
Dr. Nicefellow's Quality Worryfree Datasets
ChatAllInOne-Quantized
Extracted_Models_From_Mixtral_8x7B
- DrNicefellow/Mistral-1-from-Mixtral-8x7B-v0.1 (Text Generation • 7B • Updated • 26 • 1)
- DrNicefellow/Mistral-2-from-Mixtral-8x7B-v0.1 (Text Generation • 7B • Updated • 15)
- DrNicefellow/Mistral-3-from-Mixtral-8x7B-v0.1 (Text Generation • 7B • Updated • 13)
- DrNicefellow/Mistral-4-from-Mixtral-8x7B-v0.1 (Text Generation • 7B • Updated • 26)
Microscopic-Olmo
- DrNicefellow/Microscopic-Olmo-2B-1.1k-steps (Text Generation • Updated • 13)
- DrNicefellow/Microscopic-Olmo-2B-3.9k-steps (Text Generation • Updated • 14)
- DrNicefellow/Microscopic-Olmo-2B-7.2k-steps (Text Generation • Updated • 12)
- DrNicefellow/Microscopic-Olmo-2B-11.8k-steps (Text Generation • Updated • 10)
NanoGPTs
- DrNicefellow/Nano-GPT2-500m-29k_steps-ChatAllInOne_step-5000 (Text Generation • 0.5B • Updated • 12)
- DrNicefellow/Nano-GPT2-500m-29k_steps-ChatAllInOne_step-2500 (Text Generation • 0.5B • Updated • 9)
- DrNicefellow/Nano-GPT2-500m-29k_steps (Text Generation • 0.5B • Updated • 10)
- meta-llama/Llama-3.1-8B (Text Generation • 8B • Updated • 689k • 1.99k)
Mistral-Nemo-Instruct-2407-exl2
A friendly reminder: lower max_seq_len in text-generation-webui; otherwise you may hit CUDA out-of-memory errors.
- DrNicefellow/Mistral-Nemo-Instruct-2407-exl2-4bpw (Text Generation • Updated • 23 • 1)
- DrNicefellow/Mistral-Nemo-Instruct-2407-exl2-5bpw (Text Generation • Updated • 21)
- DrNicefellow/Mistral-Nemo-Instruct-2407-exl2-8bpw-h8 (Text Generation • Updated • 12 • 7)
- DrNicefellow/Mistral-Nemo-Instruct-2407-exl2-4.5bpw (Text Generation • Updated • 13)
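The out-of-memory reminder above comes down to KV-cache size, which grows linearly with max_seq_len. A rough back-of-the-envelope sketch, assuming Mistral-Nemo's commonly cited shape (40 layers, 8 KV heads, head dim 128) and an fp16 cache — these numbers are assumptions, so check the model's own config.json:

```python
def kv_cache_bytes(seq_len, n_layers=40, n_kv_heads=8, head_dim=128, dtype_bytes=2):
    """Estimate KV-cache memory: two tensors (K and V) per layer,
    each seq_len x n_kv_heads x head_dim at dtype_bytes per element."""
    return 2 * n_layers * n_kv_heads * head_dim * dtype_bytes * seq_len

GiB = 1024 ** 3
print(kv_cache_bytes(131072) / GiB)  # full 128k context: 20.0 GiB
print(kv_cache_bytes(8192) / GiB)    # 8k context: 1.25 GiB
```

By this estimate, dropping max_seq_len from 128k to 8k frees nearly 19 GiB of VRAM on top of the quantized weights, which is why the setting matters on consumer GPUs.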
Finetuned Models