Deepseek OCR 2 (3B)

To run this notebook, press "Runtime" and then "Run all" on a free Tesla T4 Google Colab instance!

Join Discord if you need help + ⭐ Star us on Github

To install Unsloth on your local device, follow our guide. This notebook is licensed LGPL-3.0.

You will learn how to do data prep, how to train, how to run the model, and how to save it.

News

Train MoEs - DeepSeek, GLM, Qwen and gpt-oss 12x faster with 35% less VRAM. Blog

You can now train embedding models 1.8-3.3x faster with 20% less VRAM. Blog

Ultra Long-Context Reinforcement Learning is here with 7x more context windows! Blog

3x faster LLM training with 30% less VRAM and 500K context. Blog

New in Reinforcement Learning: FP8 RL, Vision RL, Standby, and gpt-oss RL.

Visit our docs for all our model uploads and notebooks.

Installation

[1]

Unsloth
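The install cell's code is elided in this export. On Colab it is typically a single pip line; this is a sketch, and exact package pins vary between notebook revisions:

```shell
# In a Colab cell, prefix with "!" to run as a shell command.
pip install unsloth
```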

Let's first download the OCR model to local storage:

[2]
/usr/local/lib/python3.12/dist-packages/huggingface_hub/utils/_auth.py:94: UserWarning: 
The secret `HF_TOKEN` does not exist in your Colab secrets.
To authenticate with the Hugging Face Hub, create a token in your settings tab (https://huggingface.co/settings/tokens), set it as secret in your Google Colab and restart your session.
You will be able to reuse this secret in all of your notebooks.
Please note that authentication is recommended but still optional to access public models or datasets.
  warnings.warn(
Fetching 16 files:   0%|          | 0/16 [00:00<?, ?it/s]
'/content/deepseek_ocr2'
[3]
🦥 Unsloth: Will patch your computer to enable 2x faster free finetuning.
🦥 Unsloth Zoo will now patch everything to make training faster!
You are using a model of type deepseek_vl_v2 to instantiate a model of type DeepseekOCR2. This is not supported for all configurations of models and can yield errors.
Unsloth: WARNING `trust_remote_code` is True.
Are you certain you want to do remote code execution?
==((====))==  Unsloth 2026.1.4: Fast Deepseekocr2 patching. Transformers: 4.56.2.
   \\   /|    Tesla T4. Num GPUs = 1. Max memory: 14.741 GB. Platform: Linux.
O^O/ \_/ \    Torch: 2.9.0+cu126. CUDA: 7.5. CUDA Toolkit: 12.6. Triton: 3.5.0
\        /    Bfloat16 = FALSE. FA [Xformers = 0.0.33.post1. FA2 = False]
 "-____-"     Free license: http://github.com/unslothai/unsloth
Unsloth: Fast downloading is enabled - ignore downloading bars which are red colored!
Unsloth: QLoRA and full finetuning all not selected. Switching to 16bit LoRA.

Let's Evaluate Deepseek-OCR Baseline Performance on Persian Transcription

[4]
[5]
[6]
Output
[7]
The attention mask is not set and cannot be inferred from input because pad token is same as eos token. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
حفظ السلام
===============save results:===============
image: 0it [00:00, ?it/s]
other: 0it [00:00, ?it/s]
[8]
'انضباطمم شدم حقیقتن باورم نمیشد'

Baseline model performance: 23% Character Error Rate (CER) on this sample!
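The CER numbers quoted in this notebook can be reproduced with a plain character-level edit distance. A minimal sketch follows; the notebook itself may well use a library such as `jiwer`, which is an assumption:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic DP edit distance: insertions, deletions, substitutions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,               # delete ca
                curr[j - 1] + 1,           # insert cb
                prev[j - 1] + (ca != cb),  # substitute (free if equal)
            ))
        prev = curr
    return prev[-1]

def cer(prediction: str, reference: str) -> float:
    """Character Error Rate: edit distance normalised by reference length."""
    return levenshtein(prediction, reference) / len(reference)
```

For example, `cer("abxd", "abcd")` is 0.25: one substitution over four reference characters.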

Let's finetune Deepseek-OCR !

We now add LoRA adapters for parameter-efficient finetuning - this allows us to efficiently train only a small fraction of all parameters (about 2.5% here, as the training log below shows).

[NEW] We also support finetuning ONLY the vision part of the model, or ONLY the language part. Or you can select both! You can also select to finetune the attention or the MLP layers!

[9]
Unsloth: Making `model.base_model.model.model` require gradients

Data Prep

We'll be using a dataset for Persian OCR. The goal is to convert these images into a computer-readable form, i.e. text. This can be very useful for digitizing Persian text.

You can access the dataset here.

To format the dataset, all vision finetuning tasks should be formatted as follows:

	[
	  { "role": "<|User|>",
	    "content": "",
	    "images": []
	  },
	  { "role": "<|Assistant|>",
	    "content": ""
	  },
	]

[10]

Let's convert the dataset into the "correct" format for finetuning:

[11]
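The conversion cell's code is elided in this export. A sketch of what it plausibly looks like, where the field names "image" and "text" are assumptions about the dataset's schema:

```python
def convert_to_conversation(sample):
    # Wrap one (image, transcription) pair in the chat format shown above:
    # an <image> token plus the OCR prompt as the user turn, and the
    # ground-truth text as the assistant turn.
    return {
        "messages": [
            {"role": "<|User|>",
             "content": "<image>\nFree OCR. ",
             "images": [sample["image"]]},  # a PIL.Image in the real dataset
            {"role": "<|Assistant|>",
             "content": sample["text"]},
        ]
    }

# Applied across the whole dataset, e.g.:
# converted_dataset = [convert_to_conversation(s) for s in dataset]
```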

We look at how the conversations are structured for the first example:

[12]
{'messages': [{'role': '<|User|>',
   'content': '<image>\nFree OCR. ',
   'images': [<PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=218x48>]},
  {'role': '<|Assistant|>', 'content': 'همهاش جبره و اختیار توهمه'}]}
[13]

Train the model

Now let's train our model. We do 60 steps to speed things up, but for a full run you can set num_train_epochs=1 and max_steps=None. We also support DPOTrainer and GRPOTrainer for reinforcement learning!

We use our new DeepSeekOCR2DataCollator which will help in our vision finetuning setup.
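The 60-step budget is easy to put in context: with 1,000 examples and the batch settings shown in the training log, one epoch is 125 optimizer steps, so 60 steps covers just under half an epoch.

```python
num_examples = 1_000
per_device_batch_size = 2
gradient_accumulation_steps = 4

# Matches "Total batch size (2 x 4 x 1) = 8" in the training log.
effective_batch_size = per_device_batch_size * gradient_accumulation_steps
steps_per_epoch = num_examples // effective_batch_size
print(effective_batch_size, steps_per_epoch)  # 8 125
```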

[14]
/tmp/ipython-input-1258209068.py:12: FutureWarning: `tokenizer` is deprecated and will be removed in version 5.0.0 for `Trainer._unsloth___init__`. Use `processing_class` instead.
  trainer = Trainer(
[15]
GPU = Tesla T4. Max memory = 14.741 GB.
6.838 GB of memory reserved.
[16]
==((====))==  Unsloth - 2x faster free finetuning | Num GPUs used = 1
   \\   /|    Num examples = 1,000 | Num Epochs = 1 | Total steps = 60
O^O/ \_/ \    Batch size per device = 2 | Gradient accumulation steps = 4
\        /    Data Parallel GPUs = 1 | Total batch size (2 x 4 x 1) = 8
 "-____-"     Trainable parameters = 86,307,840 of 3,475,427,200 (2.48% trained)
Unsloth: Not an error, but DeepseekOCR2ForCausalLM does not accept `num_items_in_batch`.
Using gradient accumulation will be very slightly less accurate.
Read more on gradient accumulation issues here: https://unsloth.ai/blog/gradient
You are using a model of type deepseek_vl_v2 to instantiate a model of type DeepseekOCR2. This is not supported for all configurations of models and can yield errors.
[17]
744.5682 seconds used for training.
12.41 minutes used for training.
Peak reserved memory = 8.367 GB.
Peak reserved memory for training = 1.529 GB.
Peak reserved memory % of max memory = 56.76 %.
Peak reserved memory for training % of max memory = 10.372 %.
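These statistics are internally consistent, as a quick arithmetic check shows (all numbers taken from the log above):

```python
trainable, total = 86_307_840, 3_475_427_200
train_seconds = 744.5682
peak_gb, baseline_gb, max_gb = 8.367, 6.838, 14.741

trainable_pct = 100 * trainable / total       # ~2.48% of parameters trained
train_minutes = train_seconds / 60            # ~12.41 minutes
training_overhead_gb = peak_gb - baseline_gb  # ~1.529 GB used by training itself
peak_pct_of_max = 100 * peak_gb / max_gb      # ~56.76% of the T4's memory
```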

Inference

Let's run the model!

[18]
انضباطم شدم حقیقتن باورم نمیشه
===============save results:===============
image: 0it [00:00, ?it/s]
other: 0it [00:00, ?it/s]

With only 60 steps, we dramatically improved the transcription quality. The Character Error Rate (CER) on this single sample dropped from 23% to 6%, a 74% relative reduction!
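The 74% figure is just the relative change between the two CER values:

```python
baseline_cer = 0.23
finetuned_cer = 0.06

# Relative reduction: improvement as a fraction of the baseline error.
relative_reduction = (baseline_cer - finetuned_cer) / baseline_cer
print(f"{relative_reduction:.0%}")  # 74%
```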

Type | OCR
Baseline (pre-finetune) | انضباطم نندم حقيقتن باورم نميتند
Finetuned (60 steps) | انضباطم نشدم حقیقتن باورم نمیشد
Ground truth | انضباطمم شدم حقیقتن باورم نمیشد

Saving, loading finetuned models

To save the final model as LoRA adapters, either use Hugging Face's push_to_hub for an online save or save_pretrained for a local save.

[NOTE] This ONLY saves the LoRA adapters, and not the full model. To save to 16bit or GGUF, scroll down!

[19]
You are using a model of type deepseek_vl_v2 to instantiate a model of type DeepseekOCR2. This is not supported for all configurations of models and can yield errors.
('lora_model/tokenizer_config.json',
 'lora_model/special_tokens_map.json',
 'lora_model/tokenizer.json')

Now, if you want to load the LoRA adapters we just saved for inference, change False to True:

[20]
انضباطم شدم حقیقتن باورم نمیشه
===============save results:===============
image: 0it [00:00, ?it/s]
other: 0it [00:00, ?it/s]

Saving to float16 for vLLM

We also support saving to float16 directly. Select merged_16bit for float16. Use push_to_hub_merged to upload to your Hugging Face account! You can go to https://huggingface.co/settings/tokens for your personal tokens. See our docs for more deployment options.

[21]

And we're done! If you have any questions about Unsloth, find bugs, want to keep up with the latest LLM news, or want to join community projects, feel free to join our Discord channel!

Some other resources:

  1. Train your own reasoning model - Llama GRPO notebook Free Colab
  2. Saving finetunes to Ollama. Free notebook
  3. Llama 3.2 Vision finetuning - Radiography use case. Free Colab
  4. See notebooks for DPO, ORPO, Continued pretraining, conversational finetuning and more on our documentation!

Join Discord if you need help + ⭐️ Star us on Github ⭐️

This notebook and all Unsloth notebooks are licensed LGPL-3.0