Kaggle Pixtral (12B) Vision

To run this notebook, press "Runtime" and then "Run all" on a free Tesla T4 Google Colab instance!

Join Discord if you need help + ⭐ Star us on Github

To install Unsloth on your local device, follow our guide. This notebook is licensed LGPL-3.0.

You will learn how to do data prep, how to train, how to run the model, and how to save it.

News

Train MoEs - DeepSeek, GLM, Qwen and gpt-oss 12x faster with 35% less VRAM. Blog

You can now train embedding models 1.8-3.3x faster with 20% less VRAM. Blog

Ultra Long-Context Reinforcement Learning is here with 7x more context windows! Blog

3x faster LLM training with 30% less VRAM and 500K context windows. Blog

New in Reinforcement Learning: FP8 RL, Vision RL, Standby, and gpt-oss RL.

Visit our docs for all our model uploads and notebooks.

Installation

[ ]
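The install cell's contents were stripped in this export; a minimal sketch of what it typically contains, assuming a fresh Colab/Kaggle environment (exact version pins may differ in the original):

%%capture
# Install Unsloth; this pulls in compatible transformers, trl, peft, and bitsandbytes.
!pip install unsloth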

Unsloth

[ ]
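The banner and download bars below are produced by the model-loading cell. A sketch of that call, assuming Unsloth's pre-quantized Pixtral checkpoint (the repo name here is our assumption; check the Hub for the exact upload):

from unsloth import FastVisionModel

# 4-bit loading keeps the 12B model within a 16GB T4's memory budget.
model, tokenizer = FastVisionModel.from_pretrained(
    "unsloth/Pixtral-12B-2409-bnb-4bit",    # assumed 4-bit upload of Pixtral-12B
    load_in_4bit = True,                    # QLoRA-style 4-bit quantization
    use_gradient_checkpointing = "unsloth", # trades compute for VRAM on long contexts
)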
🦥 Unsloth: Will patch your computer to enable 2x faster free finetuning.
🦥 Unsloth Zoo will now patch everything to make training faster!
==((====))==  Unsloth 2024.11.9: Fast Pixtral vision patching. Transformers = 4.46.2.
   \\   /|    GPU: Tesla T4. Max memory: 14.748 GB. Platform = Linux.
O^O/ \_/ \    Pytorch: 2.5.1+cu121. CUDA = 7.5. CUDA Toolkit = 12.1.
\        /    Bfloat16 = FALSE. FA [Xformers = 0.0.28.post3. FA2 = False]
 "-____-"     Free Apache license: http://github.com/unslothai/unsloth
Unsloth: Fast downloading is enabled - ignore downloading bars which are red colored!
model.safetensors.index.json:   0%|          | 0.00/316k [00:00<?, ?B/s]
Downloading shards:   0%|          | 0/2 [00:00<?, ?it/s]
model-00001-of-00002.safetensors:   0%|          | 0.00/4.97G [00:00<?, ?B/s]
model-00002-of-00002.safetensors:   0%|          | 0.00/3.57G [00:00<?, ?B/s]
Loading checkpoint shards:   0%|          | 0/2 [00:00<?, ?it/s]
generation_config.json:   0%|          | 0.00/133 [00:00<?, ?B/s]
processor_config.json:   0%|          | 0.00/162 [00:00<?, ?B/s]
chat_template.json:   0%|          | 0.00/1.59k [00:00<?, ?B/s]
preprocessor_config.json:   0%|          | 0.00/483 [00:00<?, ?B/s]
tokenizer_config.json:   0%|          | 0.00/177k [00:00<?, ?B/s]
tokenizer.json:   0%|          | 0.00/17.1M [00:00<?, ?B/s]
special_tokens_map.json:   0%|          | 0.00/552 [00:00<?, ?B/s]

We now add LoRA adapters for parameter-efficient finetuning - this allows us to efficiently train only 1% of all parameters.

[NEW] We also support finetuning ONLY the vision part of the model, or ONLY the language part - or you can select both! You can also choose whether to finetune the attention or the MLP layers!

[ ]
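A sketch of the LoRA cell, using FastVisionModel.get_peft_model with the vision/language and attention/MLP toggles described above (the rank and alpha values are illustrative defaults, not necessarily the ones used for the run logged below):

model = FastVisionModel.get_peft_model(
    model,
    finetune_vision_layers     = True,  # finetune the vision part
    finetune_language_layers   = True,  # finetune the language part
    finetune_attention_modules = True,  # finetune attention layers
    finetune_mlp_modules       = True,  # finetune MLP layers

    r = 16,           # LoRA rank: higher = more trainable capacity, more VRAM
    lora_alpha = 16,  # usually set equal to r
    lora_dropout = 0,
    bias = "none",
    random_state = 3407,
)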

Data Prep

We'll be using a sampled dataset of general question-answer pairs. Datasets like this should be mixed with other finetuning data so the model does not forget how to perform general tasks.

You can access the dataset here. The full dataset is here.

[ ]
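The download bars below come from loading that dataset. A sketch, where DATASET_ID is a hypothetical placeholder - substitute the actual Hub ID from the links above:

from datasets import load_dataset

DATASET_ID = "username/sampled-vqa-dataset"  # hypothetical - use the dataset linked above
dataset = load_dataset(DATASET_ID, split = "train")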
README.md:   0%|          | 0.00/728 [00:00<?, ?B/s]
train-00000-of-00001.parquet:   0%|          | 0.00/357M [00:00<?, ?B/s]
test-00000-of-00001.parquet:   0%|          | 0.00/57.6M [00:00<?, ?B/s]
Generating train split:   0%|          | 0/8552 [00:00<?, ? examples/s]
Generating test split:   0%|          | 0/1364 [00:00<?, ? examples/s]

To format the dataset, all vision finetuning tasks should be formatted as follows:

[
    { "role": "user",
      "content": [{"type": "text", "text": Q}, {"type": "image", "image": image}]
    },
    { "role": "assistant",
      "content": [{"type": "text", "text": A}]
    },
]

Let's take an overview look at the dataset. We'll see what the 3rd image is, and what question-answer pair accompanies it.

[ ]
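A sketch of the inspection calls whose outputs appear below (each line would normally live in its own cell):

print(dataset)          # the Dataset summary below
dataset[2]["images"]    # displays the 3rd image (index 2) in a notebook
dataset[2]["messages"]  # the conversation shown below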
Dataset({
    features: ['messages', 'images'],
    num_rows: 8552
})
[ ]
(Image output: the dataset's 3rd image)
[ ]
[{'content': [{'index': 0, 'text': None, 'type': 'image'},
   {'index': None,
    'text': '\nWhat makes the train in the image unique compared to other trains?',
    'type': 'text'}],
  'role': 'user'},
 {'content': [{'index': None,
    'text': 'What sets the train in the image apart from other trains is the presence of a distinctive graffiti on the side of it. This graffiti is a rendition of Edvard Munch\'s famous painting, "The Scream." This street art adds a unique artistic and unconventional appearance to the train, and it attracts attention due to the reference to a well-known piece of art. It is not common for trains to have such artwork on their outer surface, especially a representation of a famous painting.',
    'type': 'text'}],
  'role': 'assistant'}]

Train the model

Now let's train our model. We do 30 steps to speed things up, but you can set num_train_epochs=1 for a full run and max_steps=None to remove the step cap. We also support DPOTrainer and GRPOTrainer for reinforcement learning!

We use our new UnslothVisionDataCollator, which prepares the image-and-text batches needed for vision finetuning.

[ ]
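A sketch of the trainer cell, assuming TRL's SFTTrainer with UnslothVisionDataCollator. Batch size, accumulation, and step count mirror the run log below; the remaining hyperparameters are typical defaults, not confirmed values:

from trl import SFTTrainer, SFTConfig
from unsloth import is_bf16_supported
from unsloth.trainer import UnslothVisionDataCollator

FastVisionModel.for_training(model)  # switch the model into training mode

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    data_collator = UnslothVisionDataCollator(model, tokenizer),  # required for vision finetuning
    train_dataset = dataset,
    args = SFTConfig(
        per_device_train_batch_size = 1,
        gradient_accumulation_steps = 4,  # effective batch size of 4, as in the log below
        max_steps = 30,                   # short demo; use num_train_epochs = 1 for a full run
        learning_rate = 2e-4,
        fp16 = not is_bf16_supported(),   # the T4 has no bfloat16, so fp16 is used
        bf16 = is_bf16_supported(),
        optim = "adamw_8bit",
        weight_decay = 0.01,
        lr_scheduler_type = "linear",
        seed = 3407,
        output_dir = "outputs",
        # These settings are required for vision finetuning:
        remove_unused_columns = False,
        dataset_text_field = "",
        dataset_kwargs = {"skip_prepare_dataset": True},
        max_seq_length = 2048,
    ),
)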
max_steps is given, it will override any value given in num_train_epochs
[ ]
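A sketch of the memory-stats cell that prints the readout below:

import torch

gpu_stats = torch.cuda.get_device_properties(0)
start_gpu_memory = round(torch.cuda.max_memory_reserved() / 1024 / 1024 / 1024, 3)
max_memory = round(gpu_stats.total_memory / 1024 / 1024 / 1024, 3)
print(f"GPU = {gpu_stats.name}. Max memory = {max_memory} GB.")
print(f"{start_gpu_memory} GB of memory reserved.")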
GPU = Tesla T4. Max memory = 14.748 GB.
8.031 GB of memory reserved.
[ ]
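The training cell itself is a single call; keeping the returned stats lets a later cell report the runtime:

trainer_stats = trainer.train()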
==((====))==  Unsloth - 2x faster free finetuning | Num GPUs = 1
   \\   /|    Num examples = 8,552 | Num Epochs = 1
O^O/ \_/ \    Batch size per device = 1 | Gradient Accumulation steps = 4
\        /    Total batch size = 4 | Total steps = 30
 "-____-"     Number of trainable parameters = 18,677,760
🦥 Unsloth needs about 1-3 minutes to load everything - please wait!
[ ]
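A sketch of the final stats cell producing the summary below, reusing start_gpu_memory and max_memory from the earlier memory cell:

used_memory = round(torch.cuda.max_memory_reserved() / 1024 / 1024 / 1024, 3)
used_memory_for_lora = round(used_memory - start_gpu_memory, 3)
used_percentage = round(used_memory / max_memory * 100, 3)
lora_percentage = round(used_memory_for_lora / max_memory * 100, 3)
print(f"{trainer_stats.metrics['train_runtime']} seconds used for training.")
print(f"{round(trainer_stats.metrics['train_runtime'] / 60, 2)} minutes used for training.")
print(f"Peak reserved memory = {used_memory} GB.")
print(f"Peak reserved memory for training = {used_memory_for_lora} GB.")
print(f"Peak reserved memory % of max memory = {used_percentage} %.")
print(f"Peak reserved memory for training % of max memory = {lora_percentage} %.")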
963.6424 seconds used for training.
16.06 minutes used for training.
Peak reserved memory = 12.643 GB.
Peak reserved memory for training = 4.612 GB.
Peak reserved memory % of max memory = 85.727 %.
Peak reserved memory for training % of max memory = 31.272 %.

Inference

Let's run the model! You can change the instruction and input - leave the output blank!

We use min_p = 0.1 and temperature = 1.5. Read this Tweet for more information on why.

[ ]
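A sketch of the inference cell that produced the output below. The image index and the exact prompt are our assumptions; max_new_tokens = 64 is inferred from the truncated output:

from transformers import TextStreamer

FastVisionModel.for_inference(model)  # enable faster inference mode

image = dataset[2]["images"][0]
instruction = "Is there something interesting about this image?"  # illustrative prompt

messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": instruction},
    ]},
]
input_text = tokenizer.apply_chat_template(messages, add_generation_prompt = True)
inputs = tokenizer(
    image,
    input_text,
    add_special_tokens = False,
    return_tensors = "pt",
).to("cuda")

text_streamer = TextStreamer(tokenizer, skip_prompt = True)
_ = model.generate(**inputs, streamer = text_streamer,
                   max_new_tokens = 64, use_cache = True,
                   temperature = 1.5, min_p = 0.1)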
Expanding inputs for image tokens in LLaVa should be done in processing. Please add `patch_size` and `vision_feature_select_strategy` to the model's processing config or set directly with `processor.patch_size = {{patch_size}}` and processor.vision_feature_select_strategy = {{vision_feature_select_strategy}}`. Using processors without these attributes in the config is deprecated and will throw an error in v4.47.
Yes, there is something interesting about this image. It shows a creative and eye-catching design on the side of a vehicle, likely a trailer or a large truck, featuring a stylized depiction of a rocket with wings and the words "Space Force" written on it. This design is visible from the perspective of someone

Saving, loading finetuned models

To save the final model as LoRA adapters, either use Hugging Face's push_to_hub for an online save or save_pretrained for a local save.

[NOTE] This ONLY saves the LoRA adapters, and not the full model. To save to 16bit or GGUF, scroll down!

[ ]
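A sketch of the save cell (the output below is the file list returned by the processor save):

model.save_pretrained("lora_model")      # saves the LoRA adapters only
tokenizer.save_pretrained("lora_model")  # saves the tokenizer/processor configs

# Online save instead (token from https://huggingface.co/settings/tokens):
# model.push_to_hub("your_name/lora_model", token = "...")
# tokenizer.push_to_hub("your_name/lora_model", token = "...")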
['lora_model/processor_config.json']

Now if you want to load the LoRA adapters we just saved for inference, change False to True:

[ ]
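A sketch of the guarded reload cell; with the flag flipped to True it reloads the adapters saved above and re-runs inference (producing the caption below):

if False:  # change to True to load the adapters saved above
    from unsloth import FastVisionModel
    model, tokenizer = FastVisionModel.from_pretrained(
        model_name = "lora_model",  # the directory we saved to
        load_in_4bit = True,
    )
    FastVisionModel.for_inference(model)  # then run generation as in the Inference section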
The image shows a train traveling through a rural area with a tall tower in the background.</s>

Saving to float16 for vLLM

We also support saving to float16 directly. Select merged_16bit for float16. Use push_to_hub_merged to upload to your Hugging Face account! You can go to https://huggingface.co/settings/tokens for your personal tokens. See our docs for more deployment options.

[ ]
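A sketch of the merged-save cell, guarded so it doesn't run by default (the repo name and token are placeholders):

if False:  # merge LoRA into the base weights and save at float16 for vLLM
    model.save_pretrained_merged("unsloth_finetune", tokenizer, save_method = "merged_16bit")

if False:  # or upload the merged model to your Hugging Face account
    model.push_to_hub_merged("your_name/unsloth_finetune", tokenizer,
                             save_method = "merged_16bit", token = "PUT_YOUR_TOKEN_HERE")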

And we're done! If you have any questions about Unsloth, find any bugs, want to keep up with the latest LLM news, or need help with your projects, feel free to join our Discord!

Some other resources:

  1. Looking to use Unsloth locally? Read our Installation Guide for details on installing Unsloth on Windows, Docker, and AMD or Intel GPUs.
  2. Learn how to do Reinforcement Learning with our RL Guide and notebooks.
  3. Read our guides and notebooks for Text-to-speech (TTS) and vision model support.
  4. Explore our LLM Tutorials Directory to find dedicated guides for each model.
  5. Need help with Inference? Read our Inference & Deployment page for details on using vLLM, llama.cpp, Ollama etc.

Join Discord if you need help + ⭐️ Star us on Github ⭐️

This notebook and all Unsloth notebooks are licensed LGPL-3.0