Whisper


To run this, press "Runtime" and then "Run all" on a free Tesla T4 Google Colab instance!

Join Discord if you need help + ⭐ Star us on Github

To install Unsloth on your own computer, follow the installation instructions on our Github page here.

You will learn how to do data prep, how to train, how to run the model, and how to save it.

News

NEW Unsloth now supports training the new gpt-oss model from OpenAI! You can start finetuning gpt-oss for free with our Colab notebook!

Unsloth now supports Text-to-Speech (TTS) models. Read our guide here.

Read our Gemma 3N Guide and check out our new Dynamic 2.0 quants, which outperform other quantization methods!

Visit our docs for all our model uploads and notebooks.

Installation

[1]
[2]
🦥 Unsloth: Will patch your computer to enable 2x faster free finetuning.
🦥 Unsloth Zoo will now patch everything to make training faster!
==((====))==  Unsloth 2025.8.10: Fast Whisper patching. Transformers: 4.51.3.
   \\   /|    Tesla T4. Num GPUs = 1. Max memory: 14.741 GB. Platform: Linux.
O^O/ \_/ \    Torch: 2.8.0+cu126. CUDA: 7.5. CUDA Toolkit: 12.6. Triton: 3.4.0
\        /    Bfloat16 = FALSE. FA [Xformers = 0.0.32.post2. FA2 = False]
 "-____-"     Free license: http://github.com/unslothai/unsloth
Unsloth: Fast downloading is enabled - ignore downloading bars which are red colored!
Unsloth: QLoRA and full finetuning all not selected. Switching to 16bit LoRA.
model.safetensors:   0%|          | 0.00/3.09G [00:00<?, ?B/s]
generation_config.json: 0.00B [00:00, ?B/s]
preprocessor_config.json:   0%|          | 0.00/357 [00:00<?, ?B/s]
tokenizer_config.json: 0.00B [00:00, ?B/s]
vocab.json: 0.00B [00:00, ?B/s]
merges.txt: 0.00B [00:00, ?B/s]
normalizer.json: 0.00B [00:00, ?B/s]
added_tokens.json: 0.00B [00:00, ?B/s]
special_tokens_map.json: 0.00B [00:00, ?B/s]
unsloth/whisper-large-v3 does not have a padding token! Will use pad_token = <|endoftext|>.
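The code for the installation and model-loading cells is not shown in this export. Below is a minimal sketch of what they likely contain, assuming Unsloth's FastModel API; the Whisper-specific keyword arguments are assumptions.

```python
# Install Unsloth (cell [1]), then load Whisper large-v3 through Unsloth (cell [2]).
# !pip install unsloth

from unsloth import FastModel
from transformers import WhisperForConditionalGeneration

# The log above shows 16-bit LoRA was used, so 4-bit loading is off.
# `auto_model`, `whisper_language` and `whisper_task` are assumed arguments.
model, tokenizer = FastModel.from_pretrained(
    model_name       = "unsloth/whisper-large-v3",
    dtype            = None,   # auto-detect; the T4 falls back to float16 (Bfloat16 = FALSE above)
    load_in_4bit     = False,
    auto_model       = WhisperForConditionalGeneration,
    whisper_language = "English",
    whisper_task     = "transcribe",
)
```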

We now add LoRA adapters so we only need to update 1 to 10% of all parameters!

[3]
Unsloth: Making `model.base_model.model.model.encoder` require gradients
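Cell [3] attaches the adapters. A hedged sketch of what it likely contains: the rank and target modules are assumptions, chosen because r = 64 on q_proj/v_proj across all 192 attention blocks gives exactly the 31,457,280 trainable parameters (2.00%) reported in the training banner further down.

```python
# Add LoRA adapters with Unsloth's PEFT helper (sketch of cell [3]).
model = FastModel.get_peft_model(
    model,
    r                          = 64,                    # assumed rank
    target_modules             = ["q_proj", "v_proj"],  # assumed target projections
    lora_alpha                 = 64,
    lora_dropout               = 0,
    bias                       = "none",
    use_gradient_checkpointing = "unsloth",             # offloads activations to save VRAM
    random_state               = 3407,
    task_type                  = None,                  # Whisper is not a causal-LM task
)
```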

Data Prep

We will use the MrDragonFox/Elise dataset, which is designed for training TTS models. Ensure that your dataset follows the required format: text, audio. You can modify this section to accommodate your own dataset, but maintaining the correct structure is essential for optimal training.

[4]
README.md: 0.00B [00:00, ?B/s]
data/train-00000-of-00001.parquet:   0%|          | 0.00/328M [00:00<?, ?B/s]
Generating train split:   0%|          | 0/1195 [00:00<?, ? examples/s]
Train split: 100%|██████████| 1123/1123 [00:47<00:00, 23.69it/s]
Test split: 100%|██████████| 72/72 [00:01<00:00, 50.11it/s]
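Cell [4] loads and preprocesses the dataset. A hedged sketch, assuming a WhisperProcessor handles feature extraction and tokenization; the helper name and split size are illustrative, chosen to match the 1,123 / 72 train / test examples shown above.

```python
# Load the Elise dataset and convert it into (input_features, labels) pairs (sketch of cell [4]).
from datasets import load_dataset, Audio
from transformers import WhisperProcessor

processor = WhisperProcessor.from_pretrained("unsloth/whisper-large-v3")

dataset = load_dataset("MrDragonFox/Elise", split = "train")
dataset = dataset.cast_column("audio", Audio(sampling_rate = 16_000))  # Whisper expects 16 kHz audio
dataset = dataset.train_test_split(test_size = 72, seed = 3407)

def prepare_example(example):
    audio = example["audio"]
    # Encoder input: log-mel features of the audio; decoder labels: token ids of the transcript.
    example["input_features"] = processor.feature_extractor(
        audio["array"], sampling_rate = audio["sampling_rate"]
    ).input_features[0]
    example["labels"] = processor.tokenizer(example["text"]).input_ids
    return example

dataset = dataset.map(prepare_example, remove_columns = dataset["train"].column_names)
```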
[6]
Downloading builder script: 0.00B [00:00, ?B/s]
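The "Downloading builder script" output of cell [6] is consistent with loading an evaluation metric; a plausible sketch, assuming word error rate via the evaluate library.

```python
# Load a word-error-rate metric for evaluating transcriptions (assumed content of cell [6]).
import evaluate

wer_metric = evaluate.load("wer")
```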

Train the model

Now let's use Hugging Face's Seq2SeqTrainer! More docs here: Transformers docs. We do 60 steps to speed things up, but you can set num_train_epochs=1 for a full run and turn off the step limit by setting max_steps=None.
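Cell [8] builds the trainer. A hedged sketch of what it likely contains: the batch size, gradient accumulation and step count are read from the training banner below; everything else, including the padding collator, is an assumption.

```python
# Set up Seq2SeqTrainer for Whisper fine-tuning (sketch of cell [8]).
from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments

def speech_collator(features):
    # Minimal stand-in for the notebook's (not shown) data collator: pad the log-mel
    # features and the label ids separately, and mask padded labels with -100.
    batch = processor.feature_extractor.pad(
        [{"input_features": f["input_features"]} for f in features], return_tensors = "pt"
    )
    labels = processor.tokenizer.pad(
        [{"input_ids": f["labels"]} for f in features], return_tensors = "pt"
    )
    batch["labels"] = labels["input_ids"].masked_fill(labels["attention_mask"].ne(1), -100)
    return batch

training_args = Seq2SeqTrainingArguments(
    per_device_train_batch_size = 1,      # from the training banner below
    gradient_accumulation_steps = 4,      # from the training banner below
    max_steps                   = 60,     # use num_train_epochs = 1 and max_steps = None for a full run
    warmup_steps                = 5,
    learning_rate               = 1e-4,
    fp16                        = True,   # the T4 has no bfloat16 support (Bfloat16 = FALSE above)
    logging_steps               = 1,
    optim                       = "adamw_8bit",
    output_dir                  = "outputs",
    remove_unused_columns       = False,  # keep the columns produced during data prep
    label_names                 = ["labels"],
    report_to                   = "none",
)

trainer = Seq2SeqTrainer(
    model            = model,
    args             = training_args,
    train_dataset    = dataset["train"],
    eval_dataset     = dataset["test"],
    data_collator    = speech_collator,
    processing_class = processor,  # the original cell evidently passed `tokenizer=`, hence the warning below
)
```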

[8]
/tmp/ipython-input-715647589.py:3: FutureWarning: `tokenizer` is deprecated and will be removed in version 5.0.0 for `Seq2SeqTrainer.__init__`. Use `processing_class` instead.
  trainer = Seq2SeqTrainer(
[9]
GPU = Tesla T4. Max memory = 14.741 GB.
3.012 GB of memory reserved.
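Cell [9] snapshots GPU memory before training, so the post-training cell can separate memory used by the model from memory used by training; a minimal sketch:

```python
# Record the starting GPU memory state (sketch of cell [9]).
import torch

gpu_stats = torch.cuda.get_device_properties(0)
start_gpu_memory = round(torch.cuda.max_memory_reserved() / 1024 / 1024 / 1024, 3)
max_memory = round(gpu_stats.total_memory / 1024 / 1024 / 1024, 3)
print(f"GPU = {gpu_stats.name}. Max memory = {max_memory} GB.")
print(f"{start_gpu_memory} GB of memory reserved.")
```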
[12]
==((====))==  Unsloth - 2x faster free finetuning | Num GPUs used = 1
   \\   /|    Num examples = 1,123 | Num Epochs = 1 | Total steps = 60
O^O/ \_/ \    Batch size per device = 1 | Gradient accumulation steps = 4
\        /    Data Parallel GPUs = 1 | Total batch size (1 x 4 x 1) = 4
 "-____-"     Trainable parameters = 31,457,280 of 1,574,947,840 (2.00% trained)
Unsloth: Not an error, but WhisperForConditionalGeneration does not accept `num_items_in_batch`.
Using gradient accumulation will be very slightly less accurate.
Read more on gradient accumulation issues here: https://unsloth.ai/blog/gradient
Passing a tuple of `past_key_values` is deprecated and will be removed in Transformers v4.43.0. You should pass an instance of `EncoderDecoderCache` instead, e.g. `past_key_values=EncoderDecoderCache.from_legacy_cache(past_key_values)`.
`use_cache = True` is incompatible with gradient checkpointing. Setting `use_cache = False`...
Unsloth: Will smartly offload gradients to save VRAM!
[13]
422.5309 seconds used for training.
7.04 minutes used for training.
Peak reserved memory = 10.348 GB.
Peak reserved memory for training = 7.336 GB.
Peak reserved memory % of max memory = 70.199 %.
Peak reserved memory for training % of max memory = 49.766 %.
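Cells [12] and [13] run the training and report the figures above; a hedged sketch, reusing start_gpu_memory and max_memory from the snapshot cell:

```python
# Train (cell [12]) and summarise runtime / memory usage (sketch of cell [13]).
trainer_stats = trainer.train()

used_memory = round(torch.cuda.max_memory_reserved() / 1024 / 1024 / 1024, 3)
used_memory_for_training = round(used_memory - start_gpu_memory, 3)
print(f"{trainer_stats.metrics['train_runtime']} seconds used for training.")
print(f"{round(trainer_stats.metrics['train_runtime'] / 60, 2)} minutes used for training.")
print(f"Peak reserved memory = {used_memory} GB.")
print(f"Peak reserved memory for training = {used_memory_for_training} GB.")
print(f"Peak reserved memory % of max memory = {round(used_memory / max_memory * 100, 3)} %.")
print(f"Peak reserved memory for training % of max memory = {round(used_memory_for_training / max_memory * 100, 3)} %.")
```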

Inference

Let's run the model! Because we finetuned Whisper for speech recognition, we need an audio file.

As an example, we use a recording from the Harvard Sentences corpus: https://en.wikipedia.org/wiki/Harvard_sentences
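Cell [14] downloads the file; the URL is visible in the wget log that follows.

```python
# Fetch a Harvard Sentences recording from Wikimedia Commons (cell [14]).
!wget https://upload.wikimedia.org/wikipedia/commons/5/5b/Speech_12dB_s16.flac
```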

[14]
--2025-08-30 12:08:09--  https://upload.wikimedia.org/wikipedia/commons/5/5b/Speech_12dB_s16.flac
Resolving upload.wikimedia.org (upload.wikimedia.org)... 198.35.26.112, 2620:0:863:ed1a::2:b
Connecting to upload.wikimedia.org (upload.wikimedia.org)|198.35.26.112|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 495373 (484K) [audio/x-flac]
Saving to: ‘Speech_12dB_s16.flac’

Speech_12dB_s16.fla 100%[===================>] 483.76K  --.-KB/s    in 0.09s   

2025-08-30 12:08:09 (5.32 MB/s) - ‘Speech_12dB_s16.flac’ saved [495373/495373]
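Cell [15] transcribes the file. A hedged sketch using the transformers automatic-speech-recognition pipeline; the exact arguments are assumptions, but the pipeline explains the "Device set to use cuda:0" line and the generation warnings below.

```python
# Run the finetuned model on the downloaded audio (sketch of cell [15]).
from transformers import pipeline

FastModel.for_inference(model)   # assumed: Unsloth's inference mode, as in its other notebooks

asr = pipeline(
    "automatic-speech-recognition",
    model             = model,
    tokenizer         = processor.tokenizer,
    feature_extractor = processor.feature_extractor,
    device            = 0,
)

print(asr("Speech_12dB_s16.flac")["text"])
```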

[15]
Device set to use cuda:0
/usr/local/lib/python3.12/dist-packages/transformers/models/whisper/generation_whisper.py:573: FutureWarning: The input name `inputs` is deprecated. Please make sure to use `input_features` instead.
  warnings.warn(
`generation_config` default values have been modified to match model-specific defaults: {'suppress_tokens': [1, 2, 7, 8, 9, 10, 14, 25, 26, 27, 28, 29, 31, 58, 59, 60, 61, 62, 63, 90, 91, 92, 93, 359, 503, 522, 542, 873, 893, 902, 918, 922, 931, 1350, 1853, 1982, 2460, 2627, 3246, 3253, 3268, 3536, 3846, 3961, 4183, 4667, 6585, 6647, 7273, 9061, 9383, 10428, 10929, 11938, 12033, 12331, 12562, 13793, 14157, 14635, 15265, 15618, 16553, 16604, 18362, 18956, 20075, 21675, 22520, 26130, 26161, 26435, 28279, 29464, 31650, 32302, 32470, 36865, 42863, 47425, 49870, 50254, 50258, 50359, 50360, 50361, 50362, 50363], 'begin_suppress_tokens': [220, 50257]}. If this is not desired, please set these values explicitly.
A custom logits processor of type <class 'transformers.generation.logits_process.SuppressTokensLogitsProcessor'> has been passed to `.generate()`, but it was also created in `.generate()`, given its parameterization. The custom <class 'transformers.generation.logits_process.SuppressTokensLogitsProcessor'> will take precedence. Please check the docstring of <class 'transformers.generation.logits_process.SuppressTokensLogitsProcessor'> to see related `.generate()` flags.
A custom logits processor of type <class 'transformers.generation.logits_process.SuppressTokensAtBeginLogitsProcessor'> has been passed to `.generate()`, but it was also created in `.generate()`, given its parameterization. The custom <class 'transformers.generation.logits_process.SuppressTokensAtBeginLogitsProcessor'> will take precedence. Please check the docstring of <class 'transformers.generation.logits_process.SuppressTokensAtBeginLogitsProcessor'> to see related `.generate()` flags.
 The birch canoe slid on the smooth planks. Glued the sheet to the dark blue background. It's easy to tell the depth of a well. Four hours of steady work faced us.

Saving, loading finetuned models

To save the final model as LoRA adapters, either use Hugging Face's push_to_hub for an online save or save_pretrained for a local save.

[NOTE] This ONLY saves the LoRA adapters, and not the full model. To save to 16bit or GGUF, scroll down!

[16]
[]
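A hedged sketch of cell [16]; your_name and hf_... are placeholders.

```python
# Save only the LoRA adapters, locally and/or to the Hugging Face Hub (sketch of cell [16]).
model.save_pretrained("lora_model")                              # local saving
tokenizer.save_pretrained("lora_model")
# model.push_to_hub("your_name/lora_model", token = "hf_...")    # online saving
# tokenizer.push_to_hub("your_name/lora_model", token = "hf_...")
```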

Saving to float16

We also support saving to float16 directly. Select merged_16bit for float16 or merged_4bit for int4. We also allow lora as a fallback, which saves just the adapters. Use push_to_hub_merged to upload to your Hugging Face account! You can go to https://huggingface.co/settings/tokens to create your personal tokens.

[ ]
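A hedged sketch of the merged-saving cell, using the save_pretrained_merged / push_to_hub_merged helpers mentioned above; your_name and hf_... are placeholders, and you flip the `if False:` guards to enable an option.

```python
# Merge LoRA into the base model and save in 16-bit, 4-bit, or adapters-only form.
if False: model.save_pretrained_merged("model", tokenizer, save_method = "merged_16bit")
if False: model.push_to_hub_merged("your_name/model", tokenizer, save_method = "merged_16bit", token = "hf_...")

if False: model.save_pretrained_merged("model", tokenizer, save_method = "merged_4bit")
if False: model.push_to_hub_merged("your_name/model", tokenizer, save_method = "merged_4bit", token = "hf_...")

# Just the LoRA adapters, as a fallback:
if False: model.save_pretrained_merged("model", tokenizer, save_method = "lora")
if False: model.push_to_hub_merged("your_name/model", tokenizer, save_method = "lora", token = "hf_...")
```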

And we're done! If you have any questions about Unsloth, find any bugs, want to keep up to date with the latest LLM news, need help, or want to join projects, feel free to join our Discord!

Some other resources:

  1. Train your own reasoning model - Llama GRPO notebook Free Colab
  2. Saving finetunes to Ollama. Free notebook
  3. Llama 3.2 Vision finetuning - Radiography use case. Free Colab
  4. See notebooks for DPO, ORPO, continued pretraining, conversational finetuning and more in our documentation!

Join Discord if you need help + ⭐️ Star us on Github ⭐️