Finetuning with unsloth: Ten Thousand Dreams#
Source for this notebook: https://colab.research.google.com/drive/1T5-zKWM_5OD21QHwXHiV9ixTRR7k3iB9?usp=sharing
!pip install unsloth tf-keras
Model#
We will use a 7B model from Mistral. Quantized to 4 bits, it is small enough to run on many consumer machines: https://ollama.com/library/mistral
from unsloth import FastLanguageModel
import torch
max_seq_length = 2048 # Choose any! We auto support RoPE Scaling internally!
dtype = None # None for auto detection. Float16 for Tesla T4, V100, Bfloat16 for Ampere+
load_in_4bit = True # Use 4bit quantization to reduce memory usage. Can be False.
model, tokenizer = FastLanguageModel.from_pretrained(
# model_name = "unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit", # More models at https://huggingface.co/unsloth
# model_name = 'unsloth/mistral-7b-instruct-v0.3-bnb-4bit',
model_name = 'unsloth/mistral-7b-v0.3',
max_seq_length = max_seq_length,
dtype = dtype,
load_in_4bit = load_in_4bit,
# token = "hf_...", # use one if using gated models like meta-llama/Llama-2-7b-hf
)
🦥 Unsloth: Will patch your computer to enable 2x faster free finetuning.
2025-05-05 08:26:00.066814: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:467] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1746433560.086723 536 cuda_dnn.cc:8579] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1746433560.092981 536 cuda_blas.cc:1407] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
W0000 00:00:1746433560.109510 536 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.
W0000 00:00:1746433560.109526 536 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.
W0000 00:00:1746433560.109528 536 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.
W0000 00:00:1746433560.109530 536 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.
2025-05-05 08:26:00.117817: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
🦥 Unsloth Zoo will now patch everything to make training faster!
==((====))== Unsloth 2025.4.7: Fast Mistral patching. Transformers: 4.51.3.
\\ /| NVIDIA A100-SXM4-80GB MIG 2g.20gb. Num GPUs = 1. Max memory: 19.5 GB. Platform: Linux.
O^O/ \_/ \ Torch: 2.7.0+cu126. CUDA: 8.0. CUDA Toolkit: 12.6. Triton: 3.3.0
\ / Bfloat16 = TRUE. FA [Xformers = 0.0.30. FA2 = False]
"-____-" Free license: http://github.com/unslothai/unsloth
Unsloth: Fast downloading is enabled - ignore downloading bars which are red colored!
We now add LoRA adapters so that we only need to update roughly 1 to 10% of all parameters (a quick check of the exact fraction follows below).
model = FastLanguageModel.get_peft_model(
model,
r = 16, # Choose any number > 0 ! Suggested 8, 16, 32, 64, 128
target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
"gate_proj", "up_proj", "down_proj",],
lora_alpha = 16,
lora_dropout = 0, # Supports any, but = 0 is optimized
bias = "none", # Supports any, but = "none" is optimized
# [NEW] "unsloth" uses 30% less VRAM, fits 2x larger batch sizes!
use_gradient_checkpointing = "unsloth", # True or "unsloth" for very long context
random_state = 3407,
use_rslora = False, # We support rank stabilized LoRA
loftq_config = None, # And LoftQ
)
Unsloth 2025.4.7 patched 32 layers with 32 QKV layers, 32 O layers and 32 MLP layers.
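To see how small that fraction actually is for r = 16, you can ask for the parameter counts directly. A minimal check, assuming the object returned by get_peft_model is a standard PEFT wrapper exposing print_trainable_parameters (which is how unsloth behaves in its upstream notebooks):
# Optional: report trainable vs. total parameter counts for the LoRA setup.
# print_trainable_parameters() is a standard PEFT method on the wrapped model.
model.print_trainable_parameters()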
Data#
See the chapter Dataset: Ten Thousand Dreams
from datasets import load_dataset # https://huggingface.co/docs/datasets/loading
dataset = load_dataset('json', data_files='data/dreams.json', split='train')
dataset
Dataset({
features: ['conversations'],
num_rows: 3324
})
# Inspect the dataset
sample = dataset[0]
sample
{'conversations': [{'content': 'I was abandoned.', 'role': 'user'},
{'content': 'To dream that you are abandoned, denotes that you will have difficulty in framing your plans for future success.',
'role': 'assistant'}]}
sample['conversations']
[{'content': 'I was abandoned.', 'role': 'user'},
{'content': 'To dream that you are abandoned, denotes that you will have difficulty in framing your plans for future success.',
'role': 'assistant'}]
Format the data with a matching chat template#
https://docs.unsloth.ai/basics/chat-templates
To finetune a Mistral model, we have to format the conversations with the Mistral chat template.
from unsloth.chat_templates import get_chat_template
tokenizer = get_chat_template(
tokenizer,
chat_template = "mistral",
)
# This function renders each conversation into a single prompt string using the
# chat template. The next step adds these strings to the dataset as a 'text' column.
def formatting_prompts_func(examples):
    convos = examples["conversations"]
    texts = [tokenizer.apply_chat_template(convo, tokenize = False, add_generation_prompt = False) for convo in convos]
    return { "text" : texts, }
# Convert it
# This will add the column 'text' to the dataset.
dataset = dataset.map(formatting_prompts_func, batched = True,)
sample = dataset[0]
sample.keys()
dict_keys(['conversations', 'text'])
sample['text']
'<s>[INST] I was abandoned. [/INST]To dream that you are abandoned, denotes that you will have difficulty in framing your plans for future success.</s>'
This matches the Mistral prompt template as shipped with Ollama. See https://ollama.com/library/mistral/blobs/491dfa501e59
[INST] {{ if .System }}{{ .System }}
{{ end }}{{ .Prompt }}[/INST]
{{- end }} {{ .Response }}
{{- if .Response }}</s>
{{- end }}
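A quick way to double-check the inference-time side of this template is to render a lone user turn with add_generation_prompt = True and inspect the raw string. This reuses only calls from the cells above; it is a sanity check, not part of the training pipeline.
# Sanity check: the exact prompt the model will see at inference time,
# i.e. the user turn wrapped in [INST] ... [/INST] with no assistant reply yet.
print(tokenizer.apply_chat_template(
    [{"role": "user", "content": "I was abandoned."}],
    tokenize = False,
    add_generation_prompt = True,
))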
Train the model#
Now let's use Hugging Face TRL's SFTTrainer (docs: https://huggingface.co/docs/trl/sft_trainer). We train for only 60 steps to keep things fast; for a full run, set num_train_epochs=1 and remove the max_steps argument. TRL's DPOTrainer is also supported.
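To put the 60-step budget in perspective: with per_device_train_batch_size = 2 and gradient_accumulation_steps = 4 (set below), each optimizer step processes 2 × 4 = 8 conversations, so 60 steps touch about 480 of the 3,324 training examples, well under one full epoch.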
from trl import SFTTrainer
from transformers import TrainingArguments, DataCollatorForSeq2Seq
from unsloth import is_bfloat16_supported
trainer = SFTTrainer(
model = model,
tokenizer = tokenizer,
train_dataset = dataset,
dataset_text_field = "text",
max_seq_length = max_seq_length,
data_collator = DataCollatorForSeq2Seq(tokenizer = tokenizer),
dataset_num_proc = 2,
    packing = False, # Packing can make training ~5x faster for short sequences; disabled here.
args = TrainingArguments(
per_device_train_batch_size = 2,
gradient_accumulation_steps = 4,
warmup_steps = 5,
# num_train_epochs = 1, # Set this for 1 full training run.
max_steps = 60,
learning_rate = 2e-4,
fp16 = not is_bfloat16_supported(),
bf16 = is_bfloat16_supported(),
logging_steps = 1,
optim = "adamw_8bit",
weight_decay = 0.01,
lr_scheduler_type = "linear",
seed = 3407,
output_dir = "outputs",
        report_to = "none", # Set to "wandb", "tensorboard", etc. for experiment tracking
),
)
#@title Show current memory stats
gpu_stats = torch.cuda.get_device_properties(0)
start_gpu_memory = round(torch.cuda.max_memory_reserved() / 1024 / 1024 / 1024, 3)
max_memory = round(gpu_stats.total_memory / 1024 / 1024 / 1024, 3)
print(f"GPU = {gpu_stats.name}. Max memory = {max_memory} GB.")
print(f"{start_gpu_memory} GB of memory reserved.")
GPU = NVIDIA A100-SXM4-80GB MIG 2g.20gb. Max memory = 19.5 GB.
8.436 GB of memory reserved.
trainer_stats = trainer.train()
#@title Show final memory and time stats
used_memory = round(torch.cuda.max_memory_reserved() / 1024 / 1024 / 1024, 3)
used_memory_for_lora = round(used_memory - start_gpu_memory, 3)
used_percentage = round(used_memory / max_memory * 100, 3)
lora_percentage = round(used_memory_for_lora / max_memory * 100, 3)
print(f"{trainer_stats.metrics['train_runtime']} seconds used for training.")
print(f"{round(trainer_stats.metrics['train_runtime']/60, 2)} minutes used for training.")
print(f"Peak reserved memory = {used_memory} GB.")
print(f"Peak reserved memory for training = {used_memory_for_lora} GB.")
print(f"Peak reserved memory % of max memory = {used_percentage} %.")
print(f"Peak reserved memory for training % of max memory = {lora_percentage} %.")
135.3865 seconds used for training.
2.26 minutes used for training.
Peak reserved memory = 8.834 GB.
Peak reserved memory for training = 0.398 GB.
Peak reserved memory % of max memory = 45.303 %.
Peak reserved memory for training % of max memory = 2.041 %.
Inference#
Let's run the model! Change the user message in the cell below to try other dream descriptions.
from unsloth.chat_templates import get_chat_template
tokenizer = get_chat_template(
tokenizer,
chat_template = "mistral",
)
FastLanguageModel.for_inference(model) # Enable native 2x faster inference
messages = [
{"role": "user", "content": "I saw an oak full of acorns."},
]
inputs = tokenizer.apply_chat_template(
messages,
tokenize = True,
add_generation_prompt = True, # Must add for generation
return_tensors = "pt",
).to("cuda")
from transformers import TextStreamer
text_streamer = TextStreamer(tokenizer, skip_prompt = True)
_ = model.generate(input_ids = inputs, streamer = text_streamer, max_new_tokens = 128,
use_cache = True, temperature = 1.5, min_p = 0.1)
To see an oak full of acorns, denotes that you will be successful in your undertakings.</s>
Saving, loading finetuned models#
To save the final model as LoRA adapters, either use Hugging Face's push_to_hub for an online save or save_pretrained for a local save.
[NOTE] This ONLY saves the LoRA adapters, not the full model. To export merged 16-bit weights or a GGUF file instead, see the sketch at the end of this section.
model.save_pretrained("lora_model_dreams") # Local saving
tokenizer.save_pretrained("lora_model_dreams")
# model.push_to_hub("your_name/lora_model", token = "...") # Online saving
# tokenizer.push_to_hub("your_name/lora_model", token = "...") # Online saving
('lora_model_dreams/tokenizer_config.json',
'lora_model_dreams/special_tokens_map.json',
'lora_model_dreams/tokenizer.model',
'lora_model_dreams/added_tokens.json',
'lora_model_dreams/tokenizer.json')
Now, to reload the LoRA adapters we just saved and run inference with them, use the cell below. It is wrapped in an if True: guard; change True to False to skip reloading and keep using the model already in memory.
if True:
    from unsloth import FastLanguageModel
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name = "lora_model_dreams", # the LoRA adapters we saved above
        max_seq_length = max_seq_length,
        dtype = dtype,
        load_in_4bit = load_in_4bit,
    )
FastLanguageModel.for_inference(model) # Enable native 2x faster inference
messages = [
{"role": "user", "content": "I was sitting in a room."},
]
inputs = tokenizer.apply_chat_template(
messages,
tokenize = True,
add_generation_prompt = True, # Must add for generation
return_tensors = "pt",
).to("cuda")
from transformers import TextStreamer
text_streamer = TextStreamer(tokenizer, skip_prompt = True)
_ = model.generate(input_ids = inputs, streamer = text_streamer, max_new_tokens = 128,
use_cache = True, temperature = 1.5, min_p = 0.1)
==((====))== Unsloth 2025.4.7: Fast Mistral patching. Transformers: 4.51.3.
\\ /| NVIDIA A100-SXM4-80GB MIG 2g.20gb. Num GPUs = 1. Max memory: 19.5 GB. Platform: Linux.
O^O/ \_/ \ Torch: 2.7.0+cu126. CUDA: 8.0. CUDA Toolkit: 12.6. Triton: 3.3.0
\ / Bfloat16 = TRUE. FA [Xformers = 0.0.30. FA2 = False]
"-____-" Free license: http://github.com/unslothai/unsloth
Unsloth: Fast downloading is enabled - ignore downloading bars which are red colored!
Unsloth 2025.4.7 patched 40 layers with 40 QKV layers, 40 O layers and 40 MLP layers.
The attention mask is not set and cannot be inferred from input because pad token is same as eos token. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
To dream that you are sitting in a room, denotes that you will be in danger of losing your position through your own carelessness.
</s>
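As noted above, save_pretrained only stores the LoRA adapters. For a standalone model, unsloth's upstream notebooks use save_pretrained_merged and save_pretrained_gguf; the commented sketch below follows that pattern with placeholder directory names, so treat it as a starting point rather than a tested recipe.
# Merge the LoRA adapters into the base weights and save them in 16-bit
# (loadable with plain transformers or vLLM). Directory names are placeholders.
# model.save_pretrained_merged("dreams_model_16bit", tokenizer, save_method = "merged_16bit")

# Or export a GGUF file (e.g. for llama.cpp / Ollama), quantized with q4_k_m:
# model.save_pretrained_gguf("dreams_model_gguf", tokenizer, quantization_method = "q4_k_m")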