Falcon H1 (0.5B) Alpaca
Unsloth Training for Falcon H1
This notebook was authored by the TII Falcon Team. For more details on the Falcon H1 series of models:
To run this, press "Runtime" and then "Run all" on a free Tesla T4 Google Colab instance!
To install Unsloth on your own computer, follow the installation instructions on our Github page here.
You will learn how to do data prep, how to train, how to run the model, & how to save it.
Installation
Unsloth
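The install cell's code is not preserved in this export; a minimal sketch of the usual Colab install follows (a fresh runtime only needs the package itself, which pulls in its own pinned dependencies):

```python
# Install Unsloth on a fresh Colab runtime.
# Unsloth brings in transformers, trl, peft, bitsandbytes, etc. itself.
!pip install unsloth
```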
🦥 Unsloth: Will patch your computer to enable 2x faster free finetuning.
==((====))==  Unsloth 2024.8: Fast Gemma2 patching. Transformers = 4.43.3.
   \\   /|    GPU: Tesla T4. Max memory: 14.748 GB. Platform = Linux.
O^O/ \_/ \    Pytorch: 2.3.1+cu121. CUDA = 7.5. CUDA Toolkit = 12.1.
\        /    Bfloat16 = FALSE. FA [Xformers = 0.0.26.post1. FA2 = False]
 "-____-"     Free Apache license: http://github.com/unslothai/unsloth
Unsloth: Fast downloading is enabled - ignore downloading bars which are red colored!
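The model-loading cell itself is not shown in this export; a minimal sketch of the call that produces the output above, assuming Unsloth's standard FastLanguageModel API (the model id here is illustrative, not confirmed by this export):

```python
from unsloth import FastLanguageModel

max_seq_length = 2048  # any length works; Unsloth handles RoPE scaling internally

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "tiiuae/Falcon-H1-0.5B-Instruct",  # illustrative model id
    max_seq_length = max_seq_length,
    dtype = None,          # auto-detect (float16 on a T4, since bfloat16 is unsupported)
    load_in_4bit = True,   # 4-bit quantization keeps the model well within 14.7 GB
)
```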
We now add LoRA adapters so we only need to update 1 to 10% of all parameters!
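The adapter cell is not preserved here; a sketch of the standard Unsloth LoRA setup that yields the patch summary below (the rank and target modules are typical defaults from Unsloth's notebooks, assumed rather than confirmed by this export):

```python
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,                      # LoRA rank; 8-128 are common choices
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    lora_alpha = 16,
    lora_dropout = 0,            # 0 is optimized in Unsloth
    bias = "none",               # "none" is optimized in Unsloth
    use_gradient_checkpointing = "unsloth",  # saves memory on long contexts
    random_state = 3407,
)
```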
Unsloth 2024.8 patched 26 layers with 26 QKV layers, 26 O layers and 26 MLP layers.
Data Prep
We now use the Alpaca dataset from yahma, a filtered version of the original 52K-example Alpaca dataset. You can replace this section with your own data prep, as sketched after the notes below.
[NOTE] To train only on completions (ignoring the user's input), read TRL's docs here.
[NOTE] Remember to add the EOS_TOKEN to the tokenized output!! Otherwise you'll get infinite generations!
If you want to use the llama-3 template for ShareGPT datasets, try our conversational notebook
For text completions like novel writing, try this notebook.
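A sketch of the data prep, following Unsloth's Alpaca notebooks: the prompt template below is taken verbatim from the generation output later in this notebook, and yahma/alpaca-cleaned is the filtered dataset referred to above. Note the EOS_TOKEN handling from the first note:

```python
from datasets import load_dataset

alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""

EOS_TOKEN = tokenizer.eos_token  # without this, the model never learns to stop

def formatting_prompts_func(examples):
    texts = []
    for instruction, inp, out in zip(
        examples["instruction"], examples["input"], examples["output"]
    ):
        # Append EOS_TOKEN, otherwise you'll get infinite generations (see note above).
        texts.append(alpaca_prompt.format(instruction, inp, out) + EOS_TOKEN)
    return {"text": texts}

dataset = load_dataset("yahma/alpaca-cleaned", split = "train")
dataset = dataset.map(formatting_prompts_func, batched = True)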
max_steps is given, it will override any value given in num_train_epochs
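The trainer cell is not shown; a sketch of an SFTTrainer configuration consistent with the run summary below (batch size 2, gradient accumulation 4, 60 total steps), using the older TRL API from Unsloth's notebooks of this era — constructing it emits the max_steps warning above:

```python
from trl import SFTTrainer
from transformers import TrainingArguments

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",
    max_seq_length = max_seq_length,
    args = TrainingArguments(
        per_device_train_batch_size = 2,   # matches the run summary below
        gradient_accumulation_steps = 4,   # total batch size = 2 * 4 = 8
        max_steps = 60,                    # overrides num_train_epochs (warning above)
        warmup_steps = 5,
        learning_rate = 2e-4,
        fp16 = True,                       # T4 has no bfloat16 support
        logging_steps = 1,
        optim = "adamw_8bit",
        weight_decay = 0.01,
        lr_scheduler_type = "linear",
        seed = 3407,
        output_dir = "outputs",
    ),
)
trainer_stats = trainer.train()
```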
GPU = Tesla T4. Max memory = 14.748 GB. 2.697 GB of memory reserved.
==((====))==  Unsloth - 2x faster free finetuning | Num GPUs = 1
   \\   /|    Num examples = 51,760 | Num Epochs = 1
O^O/ \_/ \    Batch size per device = 2 | Gradient Accumulation steps = 4
\        /    Total batch size = 8 | Total steps = 60
 "-____-"     Number of trainable parameters = 20,766,720
278.0636 seconds used for training.
4.63 minutes used for training.
Peak reserved memory = 7.684 GB.
Peak reserved memory for training = 4.987 GB.
Peak reserved memory % of max memory = 52.102 %.
Peak reserved memory for training % of max memory = 33.815 %.
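The numbers above come from querying CUDA's allocator after training; a sketch of how such stats are computed (the trainer_stats variable is assumed from the training sketch earlier):

```python
import torch

gpu_stats = torch.cuda.get_device_properties(0)
max_memory = round(gpu_stats.total_memory / 1024 / 1024 / 1024, 3)   # 14.748 GB on a T4
used_memory = round(torch.cuda.max_memory_reserved() / 1024 / 1024 / 1024, 3)

print(f"{trainer_stats.metrics['train_runtime']} seconds used for training.")
print(f"Peak reserved memory = {used_memory} GB.")
print(f"Peak reserved memory % of max memory = {round(used_memory / max_memory * 100, 3)} %.")
```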
['<bos>Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\nContinue the fibonnaci sequence.\n\n### Input:\n1, 1, 2, 3, 5, 8\n\n### Response:\nThe fibonnaci sequence is a sequence of numbers where each number is the sum of the two preceding ones. The sequence is defined as follows:\n\n1, 1, 2, 3, 5, 8, 13, 21, 34, 55, ']
You can also use a TextStreamer for continuous inference - so you can see the generation token by token, instead of waiting the whole time!
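A minimal sketch of streaming generation with Transformers' TextStreamer, reusing the alpaca_prompt from the data-prep sketch (the streamed output follows below):

```python
from transformers import TextStreamer

FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference mode

inputs = tokenizer(
    [alpaca_prompt.format(
        "Continue the fibonnaci sequence.",  # instruction (spelling as in the example)
        "1, 1, 2, 3, 5, 8",                  # input
        "",                                  # response left blank for generation
    )],
    return_tensors = "pt",
).to("cuda")

text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 128)
```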
<bos>Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
Continue the fibonnaci sequence.

### Input:
1, 1, 2, 3, 5, 8

### Response:
The fibonnaci sequence is a sequence of numbers where each number is the sum of the two preceding ones. The sequence is defined as follows:

1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765, 10946, 1771
('lora_model/tokenizer_config.json',
 'lora_model/special_tokens_map.json',
 'lora_model/tokenizer.model',
 'lora_model/added_tokens.json',
 'lora_model/tokenizer.json')

Now if you want to load the LoRA adapters we just saved for inference, set False to True:
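A sketch of the save call that produced the file list above, plus the False-to-True reload guard the text refers to (paths as shown; the reload assumes the same FastLanguageModel API used earlier):

```python
model.save_pretrained("lora_model")      # produces the adapter + tokenizer files above
tokenizer.save_pretrained("lora_model")

if False:  # flip to True to reload the saved adapters for inference
    from unsloth import FastLanguageModel
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name = "lora_model",
        max_seq_length = max_seq_length,
        load_in_4bit = True,
    )
    FastLanguageModel.for_inference(model)
```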
<bos>Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
What is a famous tall tower in Paris?

### Input:


### Response:
The Eiffel Tower is a famous tall tower in Paris, France. It is located in the 5th arrondissement of Paris and is one of the most recognizable landmarks in the world. The tower was built for the 1889 World's Fair and is 324 meters tall. It is made of iron and has 1,665 steps. The tower is a symbol of Paris and is a popular tourist attraction.<eos>
You can also use Hugging Face's AutoPeftModelForCausalLM (from the peft library). Only use this if you do not have Unsloth installed. It can be hopelessly slow, since 4-bit model downloading is not supported, and Unsloth's inference is 2x faster.
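A sketch of that PEFT-only path, guarded the same way (the load_in_4bit kwarg follows the Transformers versions of this era):

```python
if False:
    # Requires only transformers + peft, no Unsloth - but it pulls the
    # full-precision base model first, which is why it can be hopelessly slow.
    from peft import AutoPeftModelForCausalLM
    from transformers import AutoTokenizer

    model = AutoPeftModelForCausalLM.from_pretrained(
        "lora_model",
        load_in_4bit = False,
    )
    tokenizer = AutoTokenizer.from_pretrained("lora_model")
```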
Saving to float16 for vLLM
We also support saving directly to float16. Select merged_16bit for float16 or merged_4bit for int4. We also allow lora adapters as a fallback. Use push_to_hub_merged to upload to your Hugging Face account! You can get a personal token at https://huggingface.co/settings/tokens.
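A sketch of Unsloth's merged-save helpers (the repo name is a placeholder; paste your own token):

```python
# Merge LoRA into the base weights and save in float16, ready for vLLM.
if False:
    model.save_pretrained_merged("model", tokenizer, save_method = "merged_16bit")

# Or push the merged model straight to the Hub.
if False:
    model.push_to_hub_merged(
        "your_name/model", tokenizer,
        save_method = "merged_16bit",
        token = "",  # token from https://huggingface.co/settings/tokens
    )
```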
GGUF / llama.cpp Conversion
To save to GGUF / llama.cpp, we support it natively now! We clone llama.cpp and default to saving as q8_0. We allow all methods like q4_k_m. Use save_pretrained_gguf for local saving and push_to_hub_gguf for uploading to HF.
Some supported quant methods (full list on our Wiki page):
- q8_0 - Fast conversion. High resource use, but generally acceptable.
- q4_k_m - Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K.
- q5_k_m - Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K.
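A sketch of the GGUF export helpers named above, using the quantization_method strings from the list (repo name is a placeholder):

```python
# Save an 8-bit Q8_0 GGUF locally (the default method).
if False:
    model.save_pretrained_gguf("model", tokenizer)

# Or pick another quant, e.g. q4_k_m, and push to the Hub.
if False:
    model.push_to_hub_gguf(
        "your_name/model", tokenizer,
        quantization_method = "q4_k_m",
        token = "",
    )
```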
[NEW] To finetune and auto export to Ollama, try our Ollama notebook


