Qwen3 Embedding (4B)
To run this, press "Runtime" and press "Run all" on a free Tesla T4 Google Colab instance!
To install Unsloth on your local device, follow our guide. This notebook is licensed LGPL-3.0.
You will learn how to do data prep, how to train, how to run the model, & how to save it
News
Train MoEs - DeepSeek, GLM, Qwen and gpt-oss 12x faster with 35% less VRAM. Blog
You can now train embedding models 1.8-3.3x faster with 20% less VRAM. Blog
Ultra Long-Context Reinforcement Learning is here with 7x more context windows! Blog
3x faster LLM training with 30% less VRAM and 500K context. 3x faster • 500K Context
New in Reinforcement Learning: FP8 RL • Vision RL • Standby • gpt-oss RL
Visit our docs for all our model uploads and notebooks.
Installation
Unsloth
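If you are running locally instead of on Colab, a minimal install cell looks roughly like the sketch below. This is an assumption about the setup cell: the official notebook may pin exact versions of unsloth and sentence-transformers.

```python
%%capture
# Install Unsloth plus the Sentence Transformers / Datasets stack used for embedding fine-tuning.
# Versions are left unpinned here; the actual notebook cell may pin specific releases.
!pip install unsloth sentence-transformers datasets
```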
🦥 Unsloth: Will patch your computer to enable 2x faster free finetuning. 🦥 Unsloth Zoo will now patch everything to make training faster!
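The banner below is printed while the base embedding model downloads and loads. A rough sketch of that step, assuming Unsloth's usual `FastModel.from_pretrained` signature; the model name, sequence length, and precision options here are illustrative, not necessarily the notebook's exact values:

```python
from unsloth import FastModel

# Load the Qwen3 embedding base model through Unsloth.
# max_seq_length and load_in_4bit are illustrative choices, not the notebook's exact settings.
model, tokenizer = FastModel.from_pretrained(
    model_name     = "Qwen/Qwen3-Embedding-4B",
    max_seq_length = 2048,   # embedding inputs are usually short
    load_in_4bit   = False,  # the banner below shows a 16-bit sharded checkpoint
)
```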
==((====))==  Unsloth 2025.12.8: Fast Qwen3 patching. Transformers: 4.57.3.
   \\   /|    Tesla T4. Num GPUs = 1. Max memory: 14.741 GB. Platform: Linux.
O^O/ \_/ \    Torch: 2.9.0+cu126. CUDA: 7.5. CUDA Toolkit: 12.6. Triton: 3.5.0
\        /    Bfloat16 = FALSE. FA [Xformers = 0.0.33.post1. FA2 = False]
 "-____-"     Free license: http://github.com/unslothai/unsloth
Unsloth: Fast downloading is enabled - ignore downloading bars which are red colored!
We now add LoRA adapters so we only need to update a small number of parameters!
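A sketch of the LoRA setup, assuming Unsloth's standard `get_peft_model` call; the rank and target modules shown are illustrative and may differ from the notebook's exact values:

```python
# Attach LoRA adapters so only a small fraction of weights is trained
# (the training run later reports ~1.6% of parameters as trainable).
model = FastModel.get_peft_model(
    model,
    r              = 16,          # LoRA rank (illustrative)
    lora_alpha     = 16,
    lora_dropout   = 0,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing = "unsloth",
    random_state   = 3407,
)
```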
Unsloth: Making `model.base_model.model` require gradients
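The training data is a JSONL file of (anchor, positive) pairs. A minimal loading sketch with the Hugging Face datasets library; the local file name `dataset.jsonl` is an assumption, and the actual cell may pull the data from a Hub repository instead:

```python
from datasets import load_dataset

# Load (anchor, positive) pairs from a local JSONL file.
dataset = load_dataset("json", data_files="dataset.jsonl", split="train")

print(dataset)  # ~106k rows with "anchor" and "positive" columns
for row in dataset.select(range(3)):
    print(row)
```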
Let's take a look at the dataset structure:
Dataset examples:
{'anchor': '.308', 'positive': 'The .308 Winchester is a popular rifle cartridge used for hunting and target shooting.'}
{'anchor': '.308', 'positive': 'Many precision rifles are chambered in .308 for its excellent long-range accuracy.'}
{'anchor': '.308', 'positive': 'The sniper selected a .308 caliber round for the mission.'}
{'anchor': '.338 lapua', 'positive': 'The .338 Lapua Magnum is a high-powered cartridge designed for extreme long-range shooting.'}
{'anchor': '.338 lapua', 'positive': 'Military snipers often use .338 Lapua for engagements beyond 1000 meters.'}
{'anchor': '.338 lapua', 'positive': 'The rifle was chambered in .338 Lapua for maximum effective range.'}
Baseline Inference
Let's test the base model before fine-tuning to see how it performs on our specific domain.
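A minimal sketch of how such a similarity check can be run, assuming the loaded model behaves like a SentenceTransformer (i.e. exposes `encode()`); the query and candidate sentences mirror the output below, but the actual notebook cell may differ:

```python
from sentence_transformers import util

query = "apexification"
docs = [
    "induces root tip closure in non-vital teeth",
    "a brick left by Yuki",
    "a plant hormone for regulating stress responses",
    "apples are a tasty treat",
    "the weed whacker uses an engine that runs on a mixture of gas and oil",
    "a type of cancer treatment that uses drugs to boost the body's immune response",
]

# Encode the query and candidates, then rank by cosine similarity.
query_emb = model.encode(query, convert_to_tensor=True)
doc_embs  = model.encode(docs,  convert_to_tensor=True)
scores    = util.cos_sim(query_emb, doc_embs)[0]

for score, doc in sorted(zip(scores.tolist(), docs), reverse=True):
    print(f"{score:.4f} | {doc}")
```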
--- Pre-Training Results for query: 'apexification' ---
 0.7372 | induces root tip closure in non-vital teeth
 0.4884 | a brick left by Yuki
 0.4870 | a plant hormone for regulating stress responses
 0.4232 | apples are a tasty treat
 0.4218 | the weed whacker uses an engine that runs on a mixture of gas and oil
 0.3168 | a type of cancer treatment that uses drugs to boost the body's immune response
Train the model
Now let's train our model. We use MultipleNegativesRankingLoss.
This loss function uses other positives in the same batch as negative examples, which is efficient for contrastive learning.
We do 60 steps to speed things up, but you can set num_train_epochs=1 for a full run and set max_steps=None.
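A sketch of the training setup using the sentence-transformers v3 trainer API. The batch size (256) and step count (60) come from the run log below; the remaining arguments are illustrative assumptions, not the notebook's exact configuration:

```python
from sentence_transformers import SentenceTransformerTrainer, SentenceTransformerTrainingArguments
from sentence_transformers.losses import MultipleNegativesRankingLoss

# In-batch negatives: every other positive in the batch acts as a negative for a given anchor.
loss = MultipleNegativesRankingLoss(model)

args = SentenceTransformerTrainingArguments(
    output_dir                  = "outputs",
    per_device_train_batch_size = 256,   # matches the run log below
    max_steps                   = 60,    # set num_train_epochs=1 and max_steps=None for a full run
    learning_rate               = 2e-5,  # illustrative
    warmup_steps                = 5,     # illustrative
    logging_steps               = 1,
    fp16                        = True,  # the T4 used here has no bfloat16 support
)

trainer = SentenceTransformerTrainer(
    model         = model,
    args          = args,
    train_dataset = dataset,
    loss          = loss,
)
```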
GPU = Tesla T4. Max memory = 14.741 GB. 7.912 GB of memory reserved.
Let's train the model! To resume a training run, set trainer.train(resume_from_checkpoint = True)
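Assuming the trainer object from the sketch above, the launch cell is roughly:

```python
# Kick off fine-tuning; pass resume_from_checkpoint=True to continue an interrupted run.
trainer_stats = trainer.train()
```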
==((====))==  Unsloth - 2x faster free finetuning | Num GPUs used = 1
   \\   /|    Num examples = 106,628 | Num Epochs = 1 | Total steps = 60
O^O/ \_/ \    Batch size per device = 256 | Gradient accumulation steps = 1
\        /    Data Parallel GPUs = 1 | Total batch size (256 x 1 x 1) = 256
 "-____-"     Trainable parameters = 66,060,288 of 4,087,834,624 (1.62% trained)
506.3595 seconds used for training.
8.44 minutes used for training.
Peak reserved memory = 11.127 GB.
Peak reserved memory for training = 3.215 GB.
Peak reserved memory % of max memory = 75.483 %.
Peak reserved memory for training % of max memory = 21.81 %.
--- Post-Training Results for query: 'apexification' ---
 0.6654 | induces root tip closure in non-vital teeth
 0.0687 | a type of cancer treatment that uses drugs to boost the body's immune response
 0.0643 | a brick left by Yuki
 0.0517 | a plant hormone for regulating stress responses
 0.0267 | apples are a tasty treat
-0.0393 | the weed whacker uses an engine that runs on a mixture of gas and oil
Now if you want to load the LoRA adapters we just saved for inference, set False to True:
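A sketch of reloading the saved adapters, assuming the usual Unsloth pattern of a guarded cell and a local "lora_model" folder (both the guard and the folder name are illustrative):

```python
if False:  # flip to True to reload the saved adapters instead of using the in-memory model
    from unsloth import FastModel
    model, tokenizer = FastModel.from_pretrained(
        model_name     = "lora_model",  # folder written by model.save_pretrained(...)
        max_seq_length = 2048,
        load_in_4bit   = False,
    )
```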
Saving to float16 for vLLM
We also support saving to float16 directly. Select merged_16bit for float16, merged_4bit for int4, or lora to save just the adapters as a fallback. Use push_to_hub_merged to upload to your Hugging Face account! You can go to https://huggingface.co/settings/tokens for your personal tokens. See our docs for more deployment options.
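A sketch of the merged-save calls, assuming Unsloth's save_pretrained_merged / push_to_hub_merged helpers apply here as in the LLM notebooks; the repo name and token are placeholders:

```python
# Merge LoRA into the base weights and save in float16 (e.g. for vLLM serving).
if False: model.save_pretrained_merged("model", tokenizer, save_method = "merged_16bit")
if False: model.push_to_hub_merged("your_name/model", tokenizer, save_method = "merged_16bit", token = "hf_...")

# 4-bit merged variant, or plain LoRA adapters as a fallback.
if False: model.save_pretrained_merged("model", tokenizer, save_method = "merged_4bit")
if False: model.save_pretrained_merged("model", tokenizer, save_method = "lora")
```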
GGUF / llama.cpp Conversion
To save to GGUF / llama.cpp, we support it natively now! We clone llama.cpp and save to q8_0 by default, but all methods like q4_k_m are allowed. Use save_pretrained_gguf for local saving and push_to_hub_gguf for uploading to HF. A sketch of these calls follows the list below.
Some supported quant methods (full list on our docs page):
- q8_0 - Fast conversion. High resource use, but generally acceptable.
- q4_k_m - Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K.
- q5_k_m - Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K.
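A sketch of the GGUF export calls mentioned above; the repo name and token are placeholders:

```python
# Local GGUF export (defaults to q8_0 when no quantization_method is given).
if False: model.save_pretrained_gguf("model", tokenizer)
if False: model.save_pretrained_gguf("model", tokenizer, quantization_method = "q4_k_m")

# Push a GGUF file straight to the Hugging Face Hub.
if False: model.push_to_hub_gguf("your_name/model", tokenizer, quantization_method = "q4_k_m", token = "hf_...")
```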
And we're done! If you have any questions on Unsloth, find a bug, want to keep up with the latest LLM news, need help, or want to join our projects, feel free to join our Discord channel!
Some other resources:
- Looking to use Unsloth locally? Read our Installation Guide for details on installing Unsloth on Windows, Docker, AMD, Intel GPUs.
- Learn how to do Reinforcement Learning with our RL Guide and notebooks.
- Read our guides and notebooks for Text-to-speech (TTS) and vision model support.
- Explore our LLM Tutorials Directory to find dedicated guides for each model.
- Need help with Inference? Read our Inference & Deployment page for details on using vLLM, llama.cpp, Ollama etc.



