Llama3 (8B) Conversational
To run this notebook, press "Runtime" and then "Run all" on a free Tesla T4 Google Colab instance!
To install Unsloth on your local device, follow our guide. This notebook is licensed LGPL-3.0.
You will learn how to do data prep, how to train, how to run the model, and how to save it.
News
Train MoEs - DeepSeek, GLM, Qwen and gpt-oss 12x faster with 35% less VRAM. Blog
You can now train embedding models 1.8-3.3x faster with 20% less VRAM. Blog
Ultra Long-Context Reinforcement Learning is here with 7x more context windows! Blog
3x faster LLM training with 30% less VRAM and 500K context. Blog
New in Reinforcement Learning: FP8 RL • Vision RL • Standby • gpt-oss RL
Visit our docs for all our model uploads and notebooks.
Installation
Unsloth
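The install cell itself is not captured in this export; on Colab it is usually a one-liner. A minimal sketch, assuming a recent pip-installable Unsloth build:

```python
%%capture
# Install Unsloth; on Colab this pulls in compatible torch / xformers builds.
!pip install unsloth
```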
🦥 Unsloth: Will patch your computer to enable 2x faster free finetuning.
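The patching message above comes from loading the model. A sketch of the loading cell, consistent with the log below (T4 GPU, 4bit, ~5.7 GB checkpoint); the exact model name and sequence length are assumptions:

```python
from unsloth import FastLanguageModel

max_seq_length = 2048  # Unsloth auto-supports RoPE scaling, so any length works
dtype = None           # None = auto-detect: float16 on a T4, bfloat16 on Ampere+
load_in_4bit = True    # 4bit quantization to fit the 8B model in ~15 GB of VRAM

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/llama-3-8b-Instruct-bnb-4bit",  # assumed checkpoint
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
)
```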
==((====))==  Unsloth: Fast Llama patching release 2024.5
   \\   /|    GPU: Tesla T4. Max memory: 14.748 GB. Platform = Linux.
O^O/ \_/ \    Pytorch: 2.3.0+cu121. CUDA = 7.5. CUDA Toolkit = 12.1.
\        /    Bfloat16 = FALSE. Xformers = 0.0.26.post1. FA = False.
 "-____-"     Free Apache license: http://github.com/unslothai/unsloth
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
We now add LoRA adapters so we only need to update 1 to 10% of all parameters!
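A sketch of the adapter cell that produces the patching message below; the rank and target modules mirror Unsloth's published defaults:

```python
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,  # LoRA rank: 8, 16, 32, 64 are common choices
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    lora_alpha = 16,
    lora_dropout = 0,   # 0 is optimized in Unsloth
    bias = "none",      # "none" is optimized in Unsloth
    use_gradient_checkpointing = "unsloth",  # saves VRAM on long contexts
    random_state = 3407,
)
```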
Unsloth 2024.5 patched 32 layers with 32 QKV layers, 32 O layers and 32 MLP layers.
Data Prep
We now use the Llama-3 format for conversation-style finetunes. We use Open Assistant conversations in ShareGPT style. Llama-3 renders multi-turn conversations like below:
<|begin_of_text|><|start_header_id|>user<|end_header_id|>
Hello!<|eot_id|><|start_header_id|>assistant<|end_header_id|>
Hey there! How are you?<|eot_id|><|start_header_id|>user<|end_header_id|>
I'm great thanks!<|eot_id|>
[NOTE] To train only on completions (ignoring the user's input), read our docs here.
We use our get_chat_template function to get the correct chat template. We support zephyr, chatml, mistral, llama, llama-3, alpaca, vicuna, vicuna_old and our own optimized unsloth template.
Note that ShareGPT uses {"from": "human", "value": "Hi"} and not {"role": "user", "content": "Hi"}, so we pass a mapping to convert the keys.
For text completions like novel writing, try this notebook.
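Putting the above together, the data-prep cell looks roughly like this; the dataset name is an assumption based on the 9,033-example Open Assistant ShareGPT-style split logged below:

```python
from unsloth.chat_templates import get_chat_template
from datasets import load_dataset

tokenizer = get_chat_template(
    tokenizer,
    chat_template = "llama-3",
    # Map ShareGPT keys ({"from": ..., "value": ...}) to HF-style roles.
    mapping = {"role": "from", "content": "value", "user": "human", "assistant": "gpt"},
)

def formatting_prompts_func(examples):
    convos = examples["conversations"]
    texts = [tokenizer.apply_chat_template(convo, tokenize = False, add_generation_prompt = False)
             for convo in convos]
    return {"text": texts}

dataset = load_dataset("philschmid/guanaco-sharegpt-style", split = "train")  # assumed dataset
dataset = dataset.map(formatting_prompts_func, batched = True)
```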
Let's see how the Llama-3 format works by printing the 5th element of the dataset:
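Something like:

```python
print(dataset[5]["conversations"])  # the raw ShareGPT-style turns
print(dataset[5]["text"])           # the same turns rendered in Llama-3 format
```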
[{'from': 'human',
  'value': 'What is the typical wattage of bulb in a lightbox?'},
 {'from': 'gpt',
  'value': 'The typical wattage of a bulb in a lightbox is 60 watts, although domestic LED bulbs are normally much lower than 60 watts, as they produce the same or greater lumens for less wattage than alternatives. A 60-watt Equivalent LED bulb can be calculated using the 7:1 ratio, which divides 60 watts by 7 to get roughly 9 watts.'},
 {'from': 'human',
  'value': 'Rewrite your description of the typical wattage of a bulb in a lightbox to only include the key points in a list format.'}]

<|begin_of_text|><|start_header_id|>user<|end_header_id|>

What is the typical wattage of bulb in a lightbox?<|eot_id|><|start_header_id|>assistant<|end_header_id|>

The typical wattage of a bulb in a lightbox is 60 watts, although domestic LED bulbs are normally much lower than 60 watts, as they produce the same or greater lumens for less wattage than alternatives. A 60-watt Equivalent LED bulb can be calculated using the 7:1 ratio, which divides 60 watts by 7 to get roughly 9 watts.<|eot_id|><|start_header_id|>user<|end_header_id|>

Rewrite your description of the typical wattage of a bulb in a lightbox to only include the key points in a list format.<|eot_id|>
If you're looking to make your own chat template, that is also possible! You must use Jinja templating. We provide our own stripped-down version of the Unsloth template, which we find to be more efficient and which leverages ChatML, Zephyr and Alpaca styles; a trimmed version is sketched below.
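The template string and stop token here are assumptions for illustration:

```python
# A minimal Jinja chat template; `messages`, `bos_token`, `eos_token` and
# `add_generation_prompt` are the standard chat-template variables.
unsloth_template = (
    "{{ bos_token }}"
    "{{ 'You are a helpful assistant to the user\n' }}"
    "{% for message in messages %}"
        "{% if message['role'] == 'user' %}"
            "{{ '>>> User: ' + message['content'] + '\n' }}"
        "{% elif message['role'] == 'assistant' %}"
            "{{ '>>> Assistant: ' + message['content'] + eos_token + '\n' }}"
        "{% endif %}"
    "{% endfor %}"
    "{% if add_generation_prompt %}{{ '>>> Assistant: ' }}{% endif %}"
)
unsloth_eos_token = "eos_token"  # which token ends an assistant turn

tokenizer = get_chat_template(
    tokenizer,
    chat_template = (unsloth_template, unsloth_eos_token),
    mapping = {"role": "from", "content": "value", "user": "human", "assistant": "gpt"},
)
```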
More info on chat templates on our wiki page!
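Train the model
With the data prepared, the notebook trains with TRL's SFTTrainer. A sketch consistent with the settings logged below (batch size 2, gradient accumulation 4, 60 total steps); the remaining hyperparameters are assumptions:

```python
from trl import SFTTrainer
from transformers import TrainingArguments

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",
    max_seq_length = max_seq_length,
    args = TrainingArguments(
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,   # effective batch size = 8
        warmup_steps = 5,
        max_steps = 60,                    # short demo run; overrides num_train_epochs
        learning_rate = 2e-4,
        fp16 = True,                       # the T4 has no bfloat16 (see the banner above)
        logging_steps = 1,
        optim = "adamw_8bit",
        weight_decay = 0.01,
        lr_scheduler_type = "linear",
        seed = 3407,
        output_dir = "outputs",
    ),
)
trainer_stats = trainer.train()
```

Note this uses the SFTTrainer signature from the TRL version this notebook was built against; newer TRL releases move these arguments into SFTConfig.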
max_steps is given, it will override any value given in num_train_epochs
GPU = Tesla T4. Max memory = 14.748 GB. 5.594 GB of memory reserved.
==((====))==  Unsloth - 2x faster free finetuning | Num GPUs = 1
   \\   /|    Num examples = 9,033 | Num Epochs = 1
O^O/ \_/ \    Batch size per device = 2 | Gradient Accumulation steps = 4
\        /    Total batch size = 8 | Total steps = 60
 "-____-"     Number of trainable parameters = 41,943,040
804.0029 seconds used for training.
13.4 minutes used for training.
Peak reserved memory = 9.549 GB.
Peak reserved memory for training = 3.955 GB.
Peak reserved memory % of max memory = 64.748 %.
Peak reserved memory for training % of max memory = 26.817 %.
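Inference
Let's run the model! The chat template needs add_generation_prompt = True so the assistant header is appended for the model to complete. A sketch, with the prompt taken from the output below:

```python
FastLanguageModel.for_inference(model)  # enable Unsloth's native 2x faster inference

messages = [
    {"from": "human", "value": "Continue the fibonacci sequence: 1, 1, 2, 3, 5, 8,"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize = True,
    add_generation_prompt = True,  # must be True for generation
    return_tensors = "pt",
).to("cuda")

outputs = model.generate(input_ids = inputs, max_new_tokens = 64, use_cache = True)
tokenizer.batch_decode(outputs)
```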
The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results. Setting `pad_token_id` to `eos_token_id`:128001 for open-end generation.
['<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\nContinue the fibonacci sequence: 1, 1, 2, 3, 5, 8,<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n13, 21, 34, 55, 89, 144, 233, 377, 610, 985, 1597, 2584, 4181, 6765, 10946, 17711, 28657, 46368, 75025']
You can also use a TextStreamer for continuous inference - so you can see the generation token by token, instead of waiting the whole time!
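For example:

```python
from transformers import TextStreamer

text_streamer = TextStreamer(tokenizer)
_ = model.generate(input_ids = inputs, streamer = text_streamer,
                   max_new_tokens = 128, use_cache = True)
```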
The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results. Setting `pad_token_id` to `eos_token_id`:128001 for open-end generation.
<|begin_of_text|><|start_header_id|>user<|end_header_id|> Continue the fibonacci sequence: 1, 1, 2, 3, 5, 8,<|eot_id|><|start_header_id|>assistant<|end_header_id|> 13, 21, 34, 55, 89, 144, 233, 377, 610, 985, 1597, 2584, 4181, 6765, 10946, 17711, 28657, 46368, 75025, 121393, 196418, 317811, 514229, 832040, 1346269, 2178309, 3524578, 5702887, 9227465, 14930352, 24157817, 39088169, 632459
Now, if you want to load the LoRA adapters we just saved for inference, set `False` to `True` in the cell below:
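A sketch, with `lora_model` as the (assumed) local save directory:

```python
model.save_pretrained("lora_model")      # local saving of the LoRA adapters only
tokenizer.save_pretrained("lora_model")
# model.push_to_hub("your_name/lora_model", token = "...")  # online saving (hypothetical repo)

if False:  # flip to True to reload the adapters for inference
    from unsloth import FastLanguageModel
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name = "lora_model",
        max_seq_length = max_seq_length,
        dtype = dtype,
        load_in_4bit = load_in_4bit,
    )
    FastLanguageModel.for_inference(model)
```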
The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results. Setting `pad_token_id` to `eos_token_id`:128001 for open-end generation.
<|begin_of_text|><|start_header_id|>user<|end_header_id|> What is a famous tall tower in Paris?<|eot_id|><|start_header_id|>assistant<|end_header_id|> The Eiffel Tower is a famous tall tower in Paris.<|eot_id|>
You can also use Hugging Face's AutoPeftModelForCausalLM. Only use this if you do not have unsloth installed, since it can be hopelessly slow: 4bit model downloading is not supported, and Unsloth's inference is 2x faster.
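A sketch of that fallback path:

```python
if False:
    # Only use this if unsloth is not installed: it is much slower.
    from peft import AutoPeftModelForCausalLM
    from transformers import AutoTokenizer
    model = AutoPeftModelForCausalLM.from_pretrained(
        "lora_model",               # the adapter directory saved above
        load_in_4bit = load_in_4bit,
    )
    tokenizer = AutoTokenizer.from_pretrained("lora_model")
```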
Saving to float16 for vLLM
We also support saving to float16 directly. Select merged_16bit for float16 or merged_4bit for int4. We also allow saving just the LoRA adapters as a fallback. Use push_to_hub_merged to upload to your Hugging Face account! You can go to https://huggingface.co/settings/tokens for your personal tokens. See our docs for more deployment options.
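A sketch of the saving calls (repo names are placeholders):

```python
# Merge LoRA into the base weights and save as float16 (for vLLM):
if False: model.save_pretrained_merged("model", tokenizer, save_method = "merged_16bit")
if False: model.push_to_hub_merged("hf_username/model", tokenizer,
                                   save_method = "merged_16bit", token = "...")

# Merge and save as int4:
if False: model.save_pretrained_merged("model", tokenizer, save_method = "merged_4bit")

# Save just the LoRA adapters:
if False: model.save_pretrained_merged("model", tokenizer, save_method = "lora")
```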
GGUF / llama.cpp Conversion
To save to GGUF / llama.cpp, we support it natively now! We clone llama.cpp and save to q8_0 by default, though all methods such as q4_k_m are allowed. Use save_pretrained_gguf for local saving and push_to_hub_gguf for uploading to HF.
Some supported quant methods (full list on our docs page; a saving sketch follows the list):
- q8_0 - Fast conversion. High resource use, but generally acceptable.
- q4_k_m - Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K.
- q5_k_m - Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K.
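A sketch of the GGUF saving calls (repo names are placeholders):

```python
# Save locally as q8_0 (the default):
if False: model.save_pretrained_gguf("model", tokenizer)

# Save locally as q4_k_m:
if False: model.save_pretrained_gguf("model", tokenizer, quantization_method = "q4_k_m")

# Upload to Hugging Face as q4_k_m:
if False: model.push_to_hub_gguf("hf_username/model", tokenizer,
                                 quantization_method = "q4_k_m", token = "...")
```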
And we're done! If you have any questions about Unsloth, find any bugs, want to keep up with the latest LLM developments, or need help joining projects, feel free to join our Discord!
Some other resources:
- Looking to use Unsloth locally? Read our Installation Guide for details on installing Unsloth on Windows, Docker, and AMD or Intel GPUs.
- Learn how to do Reinforcement Learning with our RL Guide and notebooks.
- Read our guides and notebooks for Text-to-speech (TTS) and vision model support.
- Explore our LLM Tutorials Directory to find dedicated guides for each model.
- Need help with Inference? Read our Inference & Deployment page for details on using vLLM, llama.cpp, Ollama etc.