To run this notebook, click "Runtime" and then "Run all" on a free Tesla T4 Google Colab instance!
To install Unsloth on your local device, follow our guide. This notebook is licensed LGPL-3.0.
You will learn how to do data prep, how to train, how to run the model, and how to save it.
News
Train MoEs - DeepSeek, GLM, Qwen and gpt-oss 12x faster with 35% less VRAM. Blog
You can now train embedding models 1.8-3.3x faster with 20% less VRAM. Blog
Ultra Long-Context Reinforcement Learning is here with 7x more context windows! Blog
3x faster LLM training with 30% less VRAM and 500K context. 3x faster • 500K Context
New in Reinforcement Learning: FP8 RL • Vision RL • Standby • gpt-oss RL
Visit our docs for all our model uploads and notebooks.
Installation
Unsloth
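If you are running outside Colab, the install step is usually a single pip cell. A minimal sketch (the original install cell may pin extra dependencies):

# Install Unsloth. In Colab this typically lives in a %%capture cell to hide pip output.
!pip install unsloth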
FastModel supports loading nearly any model now! This includes Vision and Text models!
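A hedged sketch of the loading step. The checkpoint name and sequence length are assumptions; the log below shows a roughly 300M-parameter Gemma 3 model being loaded in 16-bit LoRA mode on a Tesla T4:

from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name = "unsloth/gemma-3-270m-it",  # assumed checkpoint, inferred from the logs below
    max_seq_length = 2048,                   # pick to fit your data
    load_in_4bit = False,                    # the log shows 16-bit LoRA, not QLoRA
    full_finetuning = False,
)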
🦥 Unsloth: Will patch your computer to enable 2x faster free finetuning.
🦥 Unsloth Zoo will now patch everything to make training faster!
==((====))==  Unsloth 2025.8.9: Fast Gemma3 patching. Transformers: 4.55.2.
   \\   /|    Tesla T4. Num GPUs = 1. Max memory: 14.741 GB. Platform: Linux.
O^O/ \_/ \    Torch: 2.8.0+cu126. CUDA: 7.5. CUDA Toolkit: 12.6. Triton: 3.4.0
\        /    Bfloat16 = FALSE. FA [Xformers = 0.0.32.post2. FA2 = False]
 "-____-"     Free license: http://github.com/unslothai/unsloth
Unsloth: Fast downloading is enabled - ignore downloading bars which are red colored!
Unsloth: Using float16 precision for gemma3 won't work! Using float32.
Unsloth: QLoRA and full finetuning all not selected. Switching to 16bit LoRA.
We now add LoRA adapters so we only need to update a small number of parameters!
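A sketch of the LoRA step, assuming the usual get_peft_model call. The rank and target modules here are assumptions; the training log further down reports about 10% of parameters as trainable:

model = FastModel.get_peft_model(
    model,
    r = 128,                       # LoRA rank -- an assumption
    lora_alpha = 128,
    lora_dropout = 0,
    bias = "none",
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    random_state = 3407,
)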
Unsloth: Making `model.base_model.model.model` require gradients
Data Prep
We now use the Gemma-3 format for conversation-style finetunes, using Thytu's ChessInstruct dataset. Gemma-3 renders multi-turn conversations like below:
<bos><start_of_turn>user
Hello!<end_of_turn>
<start_of_turn>model
Hey there!<end_of_turn>
We use our get_chat_template function to get the correct chat template. We support zephyr, chatml, mistral, llama, alpaca, vicuna, vicuna_old, phi3, llama3, phi4, qwen2.5, gemma3 and more.
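A sketch of the template and dataset setup. The Hugging Face dataset id and the 10,000-row subset are assumptions based on the description above and the example counts reported later:

from unsloth.chat_templates import get_chat_template
from datasets import load_dataset

# Attach the Gemma-3 chat template to the tokenizer.
tokenizer = get_chat_template(tokenizer, chat_template = "gemma3")

# Dataset id and subset size are assumptions.
dataset = load_dataset("Thytu/ChessInstruct", split = "train[:10000]")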
We now use convert_to_chatml to convert the dataset into the correct conversational format for finetuning!
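A sketch of what convert_to_chatml does, inferred from the column names in the printed row below (the helper in the original notebook may differ slightly):

# Map the dataset's 'task' / 'input' / 'expected_output' columns into a
# system / user / assistant conversation.
def convert_to_chatml(example):
    return {
        "conversations": [
            {"role": "system",    "content": example["task"]},
            {"role": "user",      "content": example["input"]},
            {"role": "assistant", "content": example["expected_output"]},
        ]
    }

dataset = dataset.map(convert_to_chatml)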
Let's see what row 100 looks like!
{'task': "Given an incomplit set of chess moves and the game's final score, write the last missing chess move.\n\nInput Format: A comma-separated list of chess moves followed by the game score.\nOutput Format: The missing chess move",
, 'input': '{"moves": ["c2c4", "g8f6", "b1c3", "c7c5", "g1f3", "e7e6", "e2e3", "d7d5", "d2d4", "b8c6", "c4d5", "e6d5", "f1e2", "c5c4", "c1d2", "f8b4", "a1c1", "e8g8", "b2b3", "b4a3", "c1b1", "c8f5", "b3c4", "f5b1", "d1b1", "d5c4", "e2c4", "a3b4", "e1g1", "a8c8", "f1d1", "d8a5", "c3e4", "f6e4", "b1e4", "b4d2", "f3d2", "c8c7", "d2f3", "c6b8", "c4b3", "b8d7", "e4f4", "c7c3", "e3e4", "a5b5", "e4e5", "a7a5", "f4e4", "a5a4", "b3d5", "h7h6", "d1b1", "b5d3", "e4d3", "c3d3", "e5e6", "d7f6", "e6f7", "g8h7", "d5e6", "g7g6", "h2h4", "f6e4", "b1b7", "h7g7", "b7a7", "d3d1", "g1h2", "e4f2", "a7a4", "d1h1", "h2g3", "f2e4", "g3f4", "e4d6", "f3e5", "h1h4", "f4e3", "d6f5", "e3d3", "f8d8", "e5d7", "h4g4", "f7f8b", "d8f8", "d7f8", "g7f8", "e6d5", "g4g3", "d3e4", "g3g2", "e4e5", "g2d2", "a4a8", "f8e7", "a8a7", "e7d8", "d5e6", "d2e2", "e5f6", "e2f2", "a7d7", "d8e8", "f6g6", "f5h4", "g6g7", "f2g2", "g7h7", "h4f3", "h7h6", "g2a2", "d4d5", "a2h2", "h6g7", "f3g5", "g7f6", "g5e4", "f6e5", "e4c3", "d7b7", "h2e2", "e5d4", "c3d5", "d4d5", "e2d2", "d5e5", "d2f2", "e6d5", "e8f8", "d5e6", "f2h2", "b7c7", "h2h5", "e6f5", "f8e8", "e5d6", "h5h6", "f5e6", "e8f8", "c7f7", "f8g8", "d6e5", "h6g6", "e5d5", "g8h8", "d5d6", "g6g5", "d6e7", "g5g7", "e7f6", "g7f7", "?"], "result": "1/2-1/2"}',
, 'expected_output': '{"missing move": "e6f7"}',
, 'KIND': 'FIND_LAST_MOVE',
, 'conversations': [{'content': "Given an incomplit set of chess moves and the game's final score, write the last missing chess move.\n\nInput Format: A comma-separated list of chess moves followed by the game score.\nOutput Format: The missing chess move",
, 'role': 'system'},
, {'content': '{"moves": ["c2c4", "g8f6", "b1c3", "c7c5", "g1f3", "e7e6", "e2e3", "d7d5", "d2d4", "b8c6", "c4d5", "e6d5", "f1e2", "c5c4", "c1d2", "f8b4", "a1c1", "e8g8", "b2b3", "b4a3", "c1b1", "c8f5", "b3c4", "f5b1", "d1b1", "d5c4", "e2c4", "a3b4", "e1g1", "a8c8", "f1d1", "d8a5", "c3e4", "f6e4", "b1e4", "b4d2", "f3d2", "c8c7", "d2f3", "c6b8", "c4b3", "b8d7", "e4f4", "c7c3", "e3e4", "a5b5", "e4e5", "a7a5", "f4e4", "a5a4", "b3d5", "h7h6", "d1b1", "b5d3", "e4d3", "c3d3", "e5e6", "d7f6", "e6f7", "g8h7", "d5e6", "g7g6", "h2h4", "f6e4", "b1b7", "h7g7", "b7a7", "d3d1", "g1h2", "e4f2", "a7a4", "d1h1", "h2g3", "f2e4", "g3f4", "e4d6", "f3e5", "h1h4", "f4e3", "d6f5", "e3d3", "f8d8", "e5d7", "h4g4", "f7f8b", "d8f8", "d7f8", "g7f8", "e6d5", "g4g3", "d3e4", "g3g2", "e4e5", "g2d2", "a4a8", "f8e7", "a8a7", "e7d8", "d5e6", "d2e2", "e5f6", "e2f2", "a7d7", "d8e8", "f6g6", "f5h4", "g6g7", "f2g2", "g7h7", "h4f3", "h7h6", "g2a2", "d4d5", "a2h2", "h6g7", "f3g5", "g7f6", "g5e4", "f6e5", "e4c3", "d7b7", "h2e2", "e5d4", "c3d5", "d4d5", "e2d2", "d5e5", "d2f2", "e6d5", "e8f8", "d5e6", "f2h2", "b7c7", "h2h5", "e6f5", "f8e8", "e5d6", "h5h6", "f5e6", "e8f8", "c7f7", "f8g8", "d6e5", "h6g6", "e5d5", "g8h8", "d5d6", "g6g5", "d6e7", "g5g7", "e7f6", "g7f7", "?"], "result": "1/2-1/2"}',
, 'role': 'user'},
, {'content': '{"missing move": "e6f7"}', 'role': 'assistant'}]} We now have to apply the chat template for Gemma3 onto the conversations, and save it to text.
Let's see how the chat template did!
'<start_of_turn>user\nGiven an incomplit set of chess moves and the game\'s final score, write the last missing chess move.\n\nInput Format: A comma-separated list of chess moves followed by the game score.\nOutput Format: The missing chess move\n\n{"moves": ["c2c4", "g8f6", "b1c3", "c7c5", "g1f3", "e7e6", "e2e3", "d7d5", "d2d4", "b8c6", "c4d5", "e6d5", "f1e2", "c5c4", "c1d2", "f8b4", "a1c1", "e8g8", "b2b3", "b4a3", "c1b1", "c8f5", "b3c4", "f5b1", "d1b1", "d5c4", "e2c4", "a3b4", "e1g1", "a8c8", "f1d1", "d8a5", "c3e4", "f6e4", "b1e4", "b4d2", "f3d2", "c8c7", "d2f3", "c6b8", "c4b3", "b8d7", "e4f4", "c7c3", "e3e4", "a5b5", "e4e5", "a7a5", "f4e4", "a5a4", "b3d5", "h7h6", "d1b1", "b5d3", "e4d3", "c3d3", "e5e6", "d7f6", "e6f7", "g8h7", "d5e6", "g7g6", "h2h4", "f6e4", "b1b7", "h7g7", "b7a7", "d3d1", "g1h2", "e4f2", "a7a4", "d1h1", "h2g3", "f2e4", "g3f4", "e4d6", "f3e5", "h1h4", "f4e3", "d6f5", "e3d3", "f8d8", "e5d7", "h4g4", "f7f8b", "d8f8", "d7f8", "g7f8", "e6d5", "g4g3", "d3e4", "g3g2", "e4e5", "g2d2", "a4a8", "f8e7", "a8a7", "e7d8", "d5e6", "d2e2", "e5f6", "e2f2", "a7d7", "d8e8", "f6g6", "f5h4", "g6g7", "f2g2", "g7h7", "h4f3", "h7h6", "g2a2", "d4d5", "a2h2", "h6g7", "f3g5", "g7f6", "g5e4", "f6e5", "e4c3", "d7b7", "h2e2", "e5d4", "c3d5", "d4d5", "e2d2", "d5e5", "d2f2", "e6d5", "e8f8", "d5e6", "f2h2", "b7c7", "h2h5", "e6f5", "f8e8", "e5d6", "h5h6", "f5e6", "e8f8", "c7f7", "f8g8", "d6e5", "h6g6", "e5d5", "g8h8", "d5d6", "g6g5", "d6e7", "g5g7", "e7f6", "g7f7", "?"], "result": "1/2-1/2"}<end_of_turn>\n<start_of_turn>model\n{"missing move": "e6f7"}<end_of_turn>\n' Unsloth: Switching to float32 training since model cannot work with float16
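Before masking and training, the formatted dataset is wrapped in TRL's SFTTrainer. A hedged sketch of that setup: the batch size and step count match the training log below, while the remaining hyperparameters are assumptions:

from trl import SFTConfig, SFTTrainer

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    args = SFTConfig(
        dataset_text_field = "text",
        per_device_train_batch_size = 8,   # matches the training log below
        gradient_accumulation_steps = 1,
        warmup_steps = 5,                  # assumption
        max_steps = 100,                   # matches "Total steps = 100" below
        learning_rate = 5e-5,              # assumption
        logging_steps = 1,
        optim = "adamw_8bit",
        weight_decay = 0.01,
        lr_scheduler_type = "linear",
        seed = 3407,
        report_to = "none",
    ),
)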
We also use Unsloth's train_on_responses_only method to train only on the assistant outputs and ignore the loss on the user's inputs. This helps increase the accuracy of finetunes!
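A sketch of that masking step, assuming the train_on_responses_only helper from unsloth.chat_templates and the Gemma-3 turn markers shown earlier:

from unsloth.chat_templates import train_on_responses_only

# Mask the system/user turns so the loss is computed only on the assistant outputs.
trainer = train_on_responses_only(
    trainer,
    instruction_part = "<start_of_turn>user\n",
    response_part = "<start_of_turn>model\n",
)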
Let's verify that masking of the instruction part worked! Let's print the 100th row again:
'<bos><start_of_turn>user\nGiven an incomplit set of chess moves and the game\'s final score, write the last missing chess move.\n\nInput Format: A comma-separated list of chess moves followed by the game score.\nOutput Format: The missing chess move\n\n{"moves": ["c2c4", "g8f6", "b1c3", "c7c5", "g1f3", "e7e6", "e2e3", "d7d5", "d2d4", "b8c6", "c4d5", "e6d5", "f1e2", "c5c4", "c1d2", "f8b4", "a1c1", "e8g8", "b2b3", "b4a3", "c1b1", "c8f5", "b3c4", "f5b1", "d1b1", "d5c4", "e2c4", "a3b4", "e1g1", "a8c8", "f1d1", "d8a5", "c3e4", "f6e4", "b1e4", "b4d2", "f3d2", "c8c7", "d2f3", "c6b8", "c4b3", "b8d7", "e4f4", "c7c3", "e3e4", "a5b5", "e4e5", "a7a5", "f4e4", "a5a4", "b3d5", "h7h6", "d1b1", "b5d3", "e4d3", "c3d3", "e5e6", "d7f6", "e6f7", "g8h7", "d5e6", "g7g6", "h2h4", "f6e4", "b1b7", "h7g7", "b7a7", "d3d1", "g1h2", "e4f2", "a7a4", "d1h1", "h2g3", "f2e4", "g3f4", "e4d6", "f3e5", "h1h4", "f4e3", "d6f5", "e3d3", "f8d8", "e5d7", "h4g4", "f7f8b", "d8f8", "d7f8", "g7f8", "e6d5", "g4g3", "d3e4", "g3g2", "e4e5", "g2d2", "a4a8", "f8e7", "a8a7", "e7d8", "d5e6", "d2e2", "e5f6", "e2f2", "a7d7", "d8e8", "f6g6", "f5h4", "g6g7", "f2g2", "g7h7", "h4f3", "h7h6", "g2a2", "d4d5", "a2h2", "h6g7", "f3g5", "g7f6", "g5e4", "f6e5", "e4c3", "d7b7", "h2e2", "e5d4", "c3d5", "d4d5", "e2d2", "d5e5", "d2f2", "e6d5", "e8f8", "d5e6", "f2h2", "b7c7", "h2h5", "e6f5", "f8e8", "e5d6", "h5h6", "f5e6", "e8f8", "c7f7", "f8g8", "d6e5", "h6g6", "e5d5", "g8h8", "d5d6", "g6g5", "d6e7", "g5g7", "e7f6", "g7f7", "?"], "result": "1/2-1/2"}<end_of_turn>\n<start_of_turn>model\n{"missing move": "e6f7"}<end_of_turn>\n' Now let's print the masked out example - you should see only the answer is present:
' {"missing move": "e6f7"}<end_of_turn>\n' GPU = Tesla T4. Max memory = 14.741 GB. 0.832 GB of memory reserved.
Let's train the model! To resume a training run, set trainer.train(resume_from_checkpoint = True)
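A minimal sketch of the training call:

trainer_stats = trainer.train()
# To resume a previous run instead:
# trainer_stats = trainer.train(resume_from_checkpoint = True)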
==((====))==  Unsloth - 2x faster free finetuning | Num GPUs used = 1
   \\   /|    Num examples = 10,000 | Num Epochs = 1 | Total steps = 100
O^O/ \_/ \    Batch size per device = 8 | Gradient accumulation steps = 1
\        /    Data Parallel GPUs = 1 | Total batch size (8 x 1 x 1) = 8
 "-____-"     Trainable parameters = 30,375,936 of 298,474,112 (10.18% trained)
Unsloth: Will smartly offload gradients to save VRAM!
497.5278 seconds used for training.
8.29 minutes used for training.
Peak reserved memory = 4.268 GB.
Peak reserved memory for training = 3.436 GB.
Peak reserved memory % of max memory = 28.953 %.
Peak reserved memory for training % of max memory = 23.309 %.
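The transcript below is a sample generation from the finetuned model. A hedged sketch of how such a sample can be produced (the exact prompt and sampling settings used in the notebook are not shown, so these are assumptions):

from transformers import TextStreamer

# Build a chess prompt in the same chat format used for training and stream a completion.
# Reusing a training example here is purely for illustration.
messages = dataset[0]["conversations"][:2]          # system + user turns only
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt = True,
    return_tensors = "pt",
).to("cuda")

_ = model.generate(
    input_ids,
    max_new_tokens = 64,                             # sampling settings are assumptions
    streamer = TextStreamer(tokenizer, skip_prompt = False),
)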
<bos><start_of_turn>user
Given an incomplit set of chess moves and the game's final score, write the last missing chess move.
Input Format: A comma-separated list of chess moves followed by the game score.
Output Format: The missing chess move
{"moves": ["e2e4", "c7c5", "g1f3", "e7e6", "d2d4", "c5d4", "f3d4", "g8f6", "b1c3", "f8b4", "e4e5", "f6e4", "d1g4", "e4c3", "g4g7", "h8f8", "a2a3", "c3b5", "a3b4", "b5d4", "c1g5", "d8b6", "g5h6", "d4c2", "e1d1", "b6b4", "d1c2", "b8c6", "f1e2", "d7d5", "g7f8", "b4f8", "h6f8", "e8f8", "f2f4", "a7a5", "a1a3", "c8d7", "h1d1", "c6e7", "c2d2", "a5a4", "d1c1", "d7c6", "g2g3", "a8b8", "c1c5", "b7b5", "b2b4", "a4b3", "a3b3", "b8a8", "b3b2", "a8a3", "b2c2", "c6e8", "c5c3", "a3a4", "c3c7", "a4a1", "c7b7", "a1h1", "b7b8", "b5b4", "e2b5", "b4b3", "c2c1", "h1h2", "d2c3", "e7f5", "b5e8", "b3b2", "e8a4", "f8g7", "b8b2", "h2h3", "b2b7", "h3g3", "c3b2", "g3g2", "b2b1", "f5d4", "a4e8", "d4e2", "e8f7", "e2c1", "f7e6", "g7f8", "b1c1", "g2e2", "b7f7", "f8e8", "e6d5", "h7h6", "e5e6", "e8d8", "c1d1", "e2b2", "d5c6", "b2b1", "d1d2", "b1b2", "d2e3", "b2b3", "e3e4", "b3b4", "e4e5", "b4b1", "e6e7", "d8c7", "e7e8q", "c7b6", "e8b8", "b6c5", "b8b1", "c5c4", "b1c2", "c4b4", "f7b7", "b4a3", "?"], "result": "1-0"}<end_of_turn>
<start_of_turn>model
{"missing move": "a3a4"}<end_of_turn>
('gemma-3/tokenizer_config.json',
 'gemma-3/special_tokens_map.json',
 'gemma-3/chat_template.jinja',
 'gemma-3/tokenizer.model',
 'gemma-3/added_tokens.json',
 'gemma-3/tokenizer.json')
Now if you want to load the LoRA adapters we just saved for inference, set False to True:
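A sketch of saving and reloading the LoRA adapters locally; the gemma-3 folder name comes from the file paths printed above:

# The tuple printed above is the return value of tokenizer.save_pretrained("gemma-3");
# the adapters themselves come from model.save_pretrained("gemma-3").
model.save_pretrained("gemma-3")
tokenizer.save_pretrained("gemma-3")

if False:  # set to True to reload the saved LoRA adapters for inference
    from unsloth import FastModel
    model, tokenizer = FastModel.from_pretrained(
        model_name = "gemma-3",     # the local folder saved above
        max_seq_length = 2048,
        load_in_4bit = False,
    )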
Saving to float16 for vLLM
We also support saving to float16 directly. Select merged_16bit for float16 or merged_4bit for int4. Saving just the LoRA adapters is also supported as a fallback. Use push_to_hub_merged to upload to your Hugging Face account! You can go to https://huggingface.co/settings/tokens for your personal tokens. See our docs for more deployment options.
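A hedged sketch of the merged-save calls, assuming Unsloth's save_pretrained_merged and push_to_hub_merged helpers; the output folder, repo name and token are placeholders:

if False:  # merge LoRA into the base model and save locally in float16
    model.save_pretrained_merged("model", tokenizer, save_method = "merged_16bit")
if False:  # or push the merged model to the Hub (repo name and token are placeholders)
    model.push_to_hub_merged(
        "your_name/gemma-3-finetune", tokenizer,
        save_method = "merged_16bit", token = "hf_...",
    )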
GGUF / llama.cpp Conversion
We now support saving to GGUF / llama.cpp natively for all models! For now, you can easily convert to Q8_0, F16 or BF16 precision; Q4_K_M for 4-bit will come later!
Likewise, if you want to push the GGUF to your Hugging Face account instead, set if False to if True and add your Hugging Face token and upload location!
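A hedged sketch of the GGUF export, assuming Unsloth's save_pretrained_gguf and push_to_hub_gguf helpers (the quantization keyword argument has changed name across Unsloth versions, so treat it as an assumption); the repo name and token are placeholders:

if False:  # save a GGUF locally for llama.cpp
    model.save_pretrained_gguf("gemma-3-finetune", tokenizer, quantization_method = "q8_0")
if False:  # or push the GGUF to the Hub
    model.push_to_hub_gguf(
        "your_name/gemma-3-finetune-gguf", tokenizer,
        quantization_method = "q8_0", token = "hf_...",
    )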
Now, use the gemma-3-finetune.gguf file or gemma-3-finetune-Q4_K_M.gguf file in llama.cpp.
And we're done! If you have any questions about Unsloth, find a bug, want to keep up with the latest LLM news, or need help joining projects, feel free to join our Discord channel!
Some other resources:
- Train your own reasoning model - Llama GRPO notebook Free Colab
- Saving finetunes to Ollama. Free notebook
- Llama 3.2 Vision finetuning - Radiography use case. Free Colab
- See notebooks for DPO, ORPO, Continued pretraining, conversational finetuning and more on our documentation!
This notebook and all Unsloth notebooks are licensed LGPL-3.0.



