
FunctionGemma (270M) Multi Turn Tool Calling


Open In Colab

To run this, press "Runtime" and press "Run all" on a free Tesla T4 Google Colab instance!

Join Discord if you need help + ⭐ Star us on Github

To install Unsloth on your local device, follow our guide. This notebook is licensed LGPL-3.0.

You will learn how to prepare data, how to train, how to run the model, and how to save it.

News

Train MoEs - DeepSeek, GLM, Qwen and gpt-oss 12x faster with 35% less VRAM. Blog

You can now train embedding models 1.8-3.3x faster with 20% less VRAM. Blog

Ultra Long-Context Reinforcement Learning is here with 7x more context windows! Blog

3x faster LLM training with 30% less VRAM and 500K context. Blog

New in Reinforcement Learning: FP8 RL, Vision RL, Standby, gpt-oss RL

Visit our docs for all our model uploads and notebooks.

Installation

[1]
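The installation cell is hidden in this export; on Colab it presumably boils down to the standard Unsloth install, roughly:

```shell
pip install unsloth
```

On a local machine, follow the installation guide linked above instead, since the exact command depends on your CUDA and PyTorch versions.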

Unsloth

[3]
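The loading cell itself is not shown; judging from the log below, it loads the 270M Gemma 3 checkpoint through Unsloth's `FastModel`. A hedged sketch, where the model identifier is an assumption, so substitute the FunctionGemma checkpoint the notebook actually uses:

```python
from unsloth import FastModel

# NOTE: the model name below is a placeholder -- swap in the real
# FunctionGemma checkpoint.
model, tokenizer = FastModel.from_pretrained(
    model_name     = "google/functiongemma-270m",  # hypothetical identifier
    max_seq_length = 2048,
    load_in_4bit   = False,  # a 270M model fits in full precision on a T4
)
```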
🦥 Unsloth: Will patch your computer to enable 2x faster free finetuning.
🦥 Unsloth Zoo will now patch everything to make training faster!
==((====))==  Unsloth 2025.12.6: Fast Gemma3 patching. Transformers: 4.57.3.
   \\   /|    Tesla T4. Num GPUs = 1. Max memory: 14.741 GB. Platform: Linux.
O^O/ \_/ \    Torch: 2.9.0+cu126. CUDA: 7.5. CUDA Toolkit: 12.6. Triton: 3.5.0
\        /    Bfloat16 = FALSE. FA [Xformers = 0.0.33.post1. FA2 = False]
 "-____-"     Free license: http://github.com/unslothai/unsloth
Unsloth: Fast downloading is enabled - ignore downloading bars which are red colored!
Unsloth: Using float16 precision for gemma3 won't work! Using float32.
Unsloth: Gemma3 does not support SDPA - switching to fast eager.
model.safetensors:   0%|          | 0.00/536M [00:00<?, ?B/s]
generation_config.json:   0%|          | 0.00/176 [00:00<?, ?B/s]
tokenizer_config.json:   0%|          | 0.00/1.17M [00:00<?, ?B/s]
tokenizer.model:   0%|          | 0.00/4.69M [00:00<?, ?B/s]
tokenizer.json:   0%|          | 0.00/33.4M [00:00<?, ?B/s]
added_tokens.json:   0%|          | 0.00/63.0 [00:00<?, ?B/s]
special_tokens_map.json:   0%|          | 0.00/714 [00:00<?, ?B/s]
chat_template.jinja:   0%|          | 0.00/13.9k [00:00<?, ?B/s]

Define tools for FunctionGemma

We'll define multiple tools which FunctionGemma can call, including the following:

  * get_today_date
  * get_current_weather
  * add_numbers
  * multiply_numbers

[4]
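The hidden cell presumably defines the four tools as plain Python functions with type hints and Google-style docstrings (the format chat templates can turn into JSON schemas). A sketch under that assumption, with stubbed weather data chosen to match the sample outputs further down; a real `get_current_weather` would call a weather API:

```python
from datetime import date

def get_today_date() -> str:
    """Returns today's date in DD Month YYYY format."""
    return date.today().strftime("%d %B %Y")

def get_current_weather(location: str) -> dict:
    """Gets the current weather for a location.

    Args:
        location: The city to get the weather for.
    """
    # Hypothetical stub data -- a real tool would query a weather API.
    fake_weather = {
        "San Francisco": {"temperature": 15, "weather": "Sunny"},
        "Sydney":        {"temperature": 25, "weather": "Cloudy"},
    }
    return fake_weather.get(location, {"temperature": 20, "weather": "Unknown"})

def add_numbers(a: float, b: float) -> float:
    """Adds two numbers together.

    Args:
        a: The first number.
        b: The second number.
    """
    return float(a + b)

def multiply_numbers(a: float, b: float) -> float:
    """Multiplies two numbers together.

    Args:
        a: The first number.
        b: The second number.
    """
    return float(a * b)
```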

We store every function in a name-to-function mapping and register them all as tools.

[5]
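The mapping cell is likely just a list plus a dictionary comprehension: the list is what the chat template sees as available tools, and the dictionary lets us dispatch a parsed call back to the real function. A sketch (the tool definitions from the previous cell are stubbed here so the snippet runs standalone):

```python
# Stubs standing in for the tool definitions from the previous cell.
def get_today_date(): return "18 December 2025"
def get_current_weather(location: str): return {"temperature": 15, "weather": "Sunny"}
def add_numbers(a: float, b: float): return float(a + b)
def multiply_numbers(a: float, b: float): return float(a * b)

# One flat list for the chat template, one name -> callable map for dispatch.
tools = [get_today_date, get_current_weather, add_numbers, multiply_numbers]
function_map = {fn.__name__: fn for fn in tools}
```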

We then write some parsing code for FunctionGemma's output (hidden here for brevity).

[6]
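The exact call syntax FunctionGemma emits is template-specific and isn't shown here, so this sketch assumes the common pattern of a JSON object like `{"name": ..., "arguments": {...}}` embedded in the generated text; adapt the pattern to whatever markers your chat template actually produces:

```python
import json
import re

# Assumption: tool calls appear as a JSON object containing a "name" key,
# e.g. {"name": "add_numbers", "arguments": {"a": 2, "b": 231.111}}.
TOOL_CALL_RE = re.compile(r"\{.*\"name\".*\}", re.DOTALL)

def parse_tool_call(text: str):
    """Return (name, arguments) if the text contains a tool call, else None."""
    match = TOOL_CALL_RE.search(text)
    if match is None:
        return None
    try:
        call = json.loads(match.group(0))
        return call["name"], call.get("arguments", {})
    except (json.JSONDecodeError, KeyError):
        return None
```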

Let's call the model!

[7]
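The hidden driver behind these calls follows the standard multi-turn tool-calling loop: generate, parse, execute the requested tool, feed the result back as a tool message, and generate again until the model answers in plain text. A generic sketch, where `generate` stands in for the notebook's `tokenizer.apply_chat_template` + `model.generate` + decode wrapper:

```python
def run_turn(generate, parse_tool_call, function_map, messages, max_rounds=5):
    """Drive one user turn until the model stops requesting tools.

    `generate(messages)` is any callable returning the assistant's text;
    `parse_tool_call(text)` returns (name, args) or None;
    `function_map` maps tool names to Python callables.
    """
    reply = ""
    for _ in range(max_rounds):
        reply = generate(messages)
        call = parse_tool_call(reply)
        messages.append({"role": "assistant", "content": reply})
        if call is None:                    # plain answer -> turn is finished
            return reply
        name, args = call
        result = function_map[name](**args)  # execute the requested tool
        messages.append({"role": "tool", "content": str(result)})
    return reply
```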
The current date is 18 December 2025.
[8]
The current weather in San Francisco is:

*   **Temperature:** 15
*   **Weather:** Sunny
[9]
The current weather in Sydney, Australia is:

*   **Temperature:** 25
*   **Weather:** Cloudy
[10]
The sum of 112358 and 123456 is 235814.0

Final Result: 235814.0
[11]
The product of 112358 and 123456 is 13871269248.0.
[12]
The sum of 2 and 231.111 is 233.111.

Final Result: 233.111

And we're done! If you have any questions about Unsloth, want to report bugs, keep up with the latest LLM news, or collaborate on projects, feel free to join our Discord!

Some other resources:

  1. Train your own reasoning model - Llama GRPO notebook Free Colab
  2. Saving finetunes to Ollama. Free notebook
  3. Llama 3.2 Vision finetuning - Radiography use case. Free Colab
  4. See notebooks for DPO, ORPO, Continued pretraining, conversational finetuning and more on our documentation!

Join Discord if you need help + ⭐️ Star us on Github ⭐️

This notebook and all Unsloth notebooks are licensed LGPL-3.0.