Kaggle Llama3.2 (1B) RAFT

To run this notebook, press "Runtime" and then "Run all" on a free Tesla T4 Google Colab instance!

Join Discord if you need help + ⭐ Star us on Github

To install Unsloth on your local device, follow our guide. This notebook is licensed LGPL-3.0.

You will learn how to do data prep, how to train, how to run the model, and how to save it.

News

Train MoEs - DeepSeek, GLM, Qwen and gpt-oss 12x faster with 35% less VRAM. Blog

You can now train embedding models 1.8-3.3x faster with 20% less VRAM. Blog

Ultra Long-Context Reinforcement Learning is here with 7x more context windows! Blog

3x faster LLM training with 30% less VRAM and 500K context. 3x faster · 500K Context

New in Reinforcement Learning: FP8 RL · Vision RL · Standby · gpt-oss RL

Visit our docs for all our model uploads and notebooks.

Installation

[ ]
[ ]
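A minimal sketch of what an installation cell might contain, assuming the standard PyPI packages (the LlamaIndex extras are our assumption for the RAFT workflow below):

```python
%%capture
!pip install unsloth
# LlamaIndex + OpenAI integrations used later in this notebook
!pip install llama-index llama-index-llms-openai llama-index-embeddings-openai
```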

Unsloth

Retrieval Augmented Finetuning (RAFT) Cookbook Recipe!

This cookbook shows how to use Unsloth for retrieval augmented finetuning (RAFT). Supervised finetuning is like a closed-book examination: we encode knowledge from the training dataset into the LLM during finetuning, and then test it on unseen examples in the "exam".

RAFT differs in that it is an open-book exam format of finetuning! We allow the LLM to see not just the question and answer (in chain-of-thought format), but also the contexts. The hope is that the LLM will not only acquire the domain knowledge, but also an improved ability to synthesize answers from context.

Reference: RAFT: Adapting Language Model to Domain Specific RAG

Code Setup

First, let's set up the OPENAI_API_KEY environment variable so that we can use the OpenAI LLMs.

[ ]
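A minimal sketch of this cell, prompting for the key instead of hard-coding it:

```python
import os
from getpass import getpass

# Prompt for the key so it never gets saved into the notebook
if "OPENAI_API_KEY" not in os.environ:
    os.environ["OPENAI_API_KEY"] = getpass("Enter your OpenAI API key: ")
```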

Next, we'll set up LlamaIndex. This involves configuring the language model (LLM) and embedding model that LlamaIndex will use. We'll be using OpenAI's gpt-4o as our LLM and text-embedding-ada-002 as our embedding model.

[ ]
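A sketch of the LlamaIndex configuration, setting the global Settings defaults to the models named above:

```python
from llama_index.core import Settings
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.llms.openai import OpenAI

# Global defaults picked up by all LlamaIndex components
Settings.llm = OpenAI(model="gpt-4o")
Settings.embed_model = OpenAIEmbedding(model="text-embedding-ada-002")
```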

Ingest documents

We'll use the following code to download a research paper and then load it using SimpleDirectoryReader. This will be the data we use for our retrieval augmented finetuning.

[ ]
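A sketch of the ingestion cell; the paper URL and local paths are placeholders, so substitute your own document:

```python
from llama_index.core import SimpleDirectoryReader

# Download a research paper into ./data (URL and filename are placeholders)
!mkdir -p data
!wget -q -O data/paper.pdf "https://arxiv.org/pdf/2403.10131"

documents = SimpleDirectoryReader(input_dir="data").load_data(show_progress=True)
```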
Loading files: 100%|██████████| 1/1 [00:00<00:00,  1.19file/s]

Retrieval Augmented Finetuning

Getting the RAFT dataset

LlamaIndex has very kindly adapted the source code of the RAFT repository and made it even easier to generate your own RAFT dataset. Just point it to your filepath.

Reference: RAFTDatasetPack

[14]

This cell takes quite a while to run! Go have a coffee ☕

(It took 19 minutes for the cell to finish running.)

[15]
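A sketch of these two cells, assuming the RAFTDatasetPack referenced above is fetched via download_llama_pack:

```python
from llama_index.core.llama_pack import download_llama_pack

# Fetch the pack class from LlamaHub into a local directory
RAFTDatasetPack = download_llama_pack("RAFTDatasetPack", "./raft_dataset_pack")
raft_pack = RAFTDatasetPack("data/paper.pdf")

# Generates questions, distractor contexts, and chain-of-thought answers
# with the configured LLM -- this is the slow step
dataset = raft_pack.run()
```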

Let's have a look!

[18]
[24]
[27]
[16]
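These cells inspect a few examples and then persist the dataset; a sketch, with field names taken from the dataset printout further below:

```python
# Peek at one generated example
print(dataset[0]["question"])
print(dataset[0]["cot_answer"])

# Dataset.to_json returns the number of bytes written (see the output below)
dataset.to_json("raft_dataset.jsonl")
```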
Creating json from Arrow format:   0%|          | 0/1 [00:00<?, ?ba/s]
2966201

Training the LLM

Our dataset is a HuggingFace Dataset object, so we can use its built-in train_test_split method to create train and test splits.

[19]
[20]
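A sketch of the split; test_size=0.1 reproduces the 301/34 row counts shown below (the seed is our assumption):

```python
# 90/10 train-test split on the HuggingFace Dataset
splits = dataset.train_test_split(test_size=0.1, seed=42)
train_dataset, eval_dataset = splits["train"], splits["test"]
train_dataset, eval_dataset
```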
(Dataset({
     features: ['id', 'type', 'question', 'context', 'oracle_context', 'cot_answer', 'instruction'],
     num_rows: 301
 }),
 Dataset({
     features: ['id', 'type', 'question', 'context', 'oracle_context', 'cot_answer', 'instruction'],
     num_rows: 34
 }))

Now let's get the model!

[ ]
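A sketch of loading the model with Unsloth; the exact model name and max_seq_length are assumptions consistent with this notebook's Llama 3.2 (1B) target:

```python
from unsloth import FastLanguageModel

max_seq_length = 2048  # assumption: raise this if your contexts are longer

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-1B-Instruct",  # assumption
    max_seq_length=max_seq_length,
    dtype=None,         # auto-detect (bfloat16 on this GPU)
    load_in_4bit=True,  # 4-bit quantization to reduce VRAM use
)
```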
🦥 Unsloth: Will patch your computer to enable 2x faster free finetuning.
🦥 Unsloth Zoo will now patch everything to make training faster!
INFO 05-21 06:09:36 [importing.py:53] Triton module has been replaced with a placeholder.
INFO 05-21 06:09:36 [__init__.py:239] Automatically detected platform cuda.
==((====))==  Unsloth 2025.4.7: Fast Llama patching. Transformers: 4.51.3. vLLM: 0.8.5.post1.
   \\   /|    NVIDIA A10G. Num GPUs = 1. Max memory: 22.184 GB. Platform: Linux.
O^O/ \_/ \    Torch: 2.6.0+cu124. CUDA: 8.6. CUDA Toolkit: 12.4. Triton: 3.2.0
\        /    Bfloat16 = TRUE. FA [Xformers = 0.0.29.post2. FA2 = False]
 "-____-"     Free license: http://github.com/unslothai/unsloth
Unsloth: Fast downloading is enabled - ignore downloading bars which are red colored!
[22]
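This cell attaches the LoRA adapters; a sketch whose hyperparameters (r=16, all attention and MLP projections) are assumptions consistent with the patch log below:

```python
model = FastLanguageModel.get_peft_model(
    model,
    r=16,  # LoRA rank (assumption)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
    lora_dropout=0,
    bias="none",
    use_gradient_checkpointing="unsloth",  # offloads gradients to save VRAM
    random_state=3407,
)
```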
Unsloth 2025.4.7 patched 16 layers with 16 QKV layers, 16 O layers and 16 MLP layers.

Formatting the prompts

We need to put everything together into a single 'text' field for the LLM to be trained on. According to the RAFT paper, we add the context along with the question and chain-of-thought answer in a bid to help our LLM learn how to use the context to answer the question. Let's do that!

[25]
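A sketch of the formatting step. Per the RAFT setup, the instruction field already bundles the retrieved contexts with the question, so we append the chain-of-thought answer plus an EOS token:

```python
EOS_TOKEN = tokenizer.eos_token  # appended so the model learns when to stop

def formatting_prompts_func(examples):
    texts = [
        instruction + "\n" + answer + EOS_TOKEN
        for instruction, answer in zip(examples["instruction"], examples["cot_answer"])
    ]
    return {"text": texts}

train_dataset = train_dataset.map(formatting_prompts_func, batched=True)
eval_dataset = eval_dataset.map(formatting_prompts_func, batched=True)
```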
Map:   0%|          | 0/301 [00:00<?, ? examples/s]
Map:   0%|          | 0/34 [00:00<?, ? examples/s]

Let's take a look at what we just did!

[26]

And now we finally get to training!

[28]
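A sketch of the trainer. The batch size, gradient accumulation steps, and epoch count match the run summary below; the learning rate and optimizer are assumptions based on common LoRA defaults:

```python
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import is_bfloat16_supported

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_dataset,
    dataset_text_field="text",
    max_seq_length=max_seq_length,
    dataset_num_proc=4,
    args=TrainingArguments(
        per_device_train_batch_size=1,  # matches the run summary below
        gradient_accumulation_steps=8,  # effective batch size of 8
        num_train_epochs=5,
        learning_rate=2e-4,             # assumption
        fp16=not is_bfloat16_supported(),
        bf16=is_bfloat16_supported(),
        logging_steps=1,
        optim="adamw_8bit",
        weight_decay=0.01,
        lr_scheduler_type="linear",
        seed=3407,
        output_dir="outputs",
        report_to="wandb",              # this run logs to Weights & Biases
    ),
)
```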
Unsloth: Tokenizing ["text"] (num_proc=4):   0%|          | 0/301 [00:00<?, ? examples/s]
Unsloth: Tokenizing ["text"] (num_proc=4):   0%|          | 0/34 [00:00<?, ? examples/s]

Current memory statistics

[29]
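A sketch of the memory-stats cell, using torch's CUDA counters, consistent with the printout below:

```python
import torch

gpu_stats = torch.cuda.get_device_properties(0)
start_gpu_memory = round(torch.cuda.max_memory_reserved() / 1024**3, 3)
max_memory = round(gpu_stats.total_memory / 1024**3, 3)
print(f"GPU = {gpu_stats.name}. Max memory = {max_memory} GB.")
print(f"{start_gpu_memory} GB of memory reserved.")
```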
GPU = NVIDIA A10G. Max memory = 22.184 GB.
1.457 GB of memory reserved.
[30]
==((====))==  Unsloth - 2x faster free finetuning | Num GPUs used = 1
   \\   /|    Num examples = 301 | Num Epochs = 5 | Total steps = 185
O^O/ \_/ \    Batch size per device = 1 | Gradient accumulation steps = 8
\        /    Data Parallel GPUs = 1 | Total batch size (1 x 8 x 1) = 8
 "-____-"     Trainable parameters = 11,272,192/1,000,000,000 (1.13% trained)
wandb: Currently logged in as: tituslhy to https://api.wandb.ai. Use `wandb login --relogin` to force relogin
Unsloth: Will smartly offload gradients to save VRAM!
Unsloth: Not an error, but LlamaForCausalLM does not accept `num_items_in_batch`.
Using gradient accumulation will be very slightly less accurate.
Read more on gradient accumulation issues here: https://unsloth.ai/blog/gradient

Used memory statistics

[ ]
637.9309 seconds used for training.
10.63 minutes used for training.
Peak reserved memory = 2.156 GB.
Peak reserved memory for training = 0.699 GB.
Peak reserved memory % of max memory = 9.719 %.
Peak reserved memory for training % of max memory = 3.151 %.

Saving to float16 for VLLM

We also support saving to float16 directly. Select merged_16bit for float16 or merged_4bit for int4. Saving just the LoRA adapters is also available as a fallback. Use push_to_hub_merged to upload to your Hugging Face account! You can get your personal tokens at https://huggingface.co/settings/tokens. See our docs for more deployment options.

[ ]
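A sketch of the saving cell; the `if False` guards follow the usual Unsloth notebook pattern (flip one to True to run it), and the repo name and token are placeholders:

```python
# Merge LoRA weights into the base model and save in float16
if False:
    model.save_pretrained_merged("model", tokenizer, save_method="merged_16bit")

# Or push straight to the Hugging Face Hub
if False:
    model.push_to_hub_merged("your-username/llama3.2-1b-raft", tokenizer,
                             save_method="merged_16bit", token="hf_...")
```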

GGUF / llama.cpp Conversion

Saving to GGUF / llama.cpp is now supported natively! We clone llama.cpp and save to q8_0 by default, but all methods such as q4_k_m are allowed. Use save_pretrained_gguf for local saving and push_to_hub_gguf for uploading to HF.

Some supported quant methods (full list on our docs page):

  • q8_0 - Fast conversion. High resource use, but generally acceptable.
  • q4_k_m - Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K.
  • q5_k_m - Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K.

[NEW] To finetune and auto export to Ollama, try our Ollama notebook

[ ]
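A sketch of the GGUF export, again guarded with `if False`; the quantization methods follow the list above, and repo names and token are placeholders:

```python
# Default q8_0 quantization
if False:
    model.save_pretrained_gguf("model", tokenizer)

# Recommended q4_k_m quantization
if False:
    model.save_pretrained_gguf("model", tokenizer, quantization_method="q4_k_m")

# Upload the GGUF to the Hugging Face Hub
if False:
    model.push_to_hub_gguf("your-username/llama3.2-1b-raft-gguf", tokenizer,
                           quantization_method="q4_k_m", token="hf_...")
```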

And we're done! If you have any questions about Unsloth, want to report bugs, want to keep up to date with the latest LLM news, or just need help with your projects, feel free to join our Discord!

Some other resources:

  1. Looking to use Unsloth locally? Read our Installation Guide for details on installing Unsloth on Windows, Docker, and AMD or Intel GPUs.
  2. Learn how to do Reinforcement Learning with our RL Guide and notebooks.
  3. Read our guides and notebooks for Text-to-speech (TTS) and vision model support.
  4. Explore our LLM Tutorials Directory to find dedicated guides for each model.
  5. Need help with Inference? Read our Inference & Deployment page for details on using vLLM, llama.cpp, Ollama etc.

Join Discord if you need help + ⭐️ Star us on Github ⭐️

This notebook and all Unsloth notebooks are licensed LGPL-3.0