
Fine Tuning

Llama fine-tuning

This example fine-tunes Llama 2 on a Q&A dataset.

Problem

  • Name: Llama Finetuning
  • Problem Type: text_causal_classification_modeling
  • Model Source: HF
  • Model Name: meta-llama/Llama-2-7b-hf
  • Secrets Blueprint: HF Meta
    • A token with read access to HF (the Llama 2 repo is gated, so the license must be accepted for this account)
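The Problem fields above can be pictured as a plain config mapping. This is an illustrative sketch only; the key names are hypothetical and do not reflect the platform's actual schema.

```python
# Hypothetical sketch of the "Problem" form as a config mapping.
# Key names are illustrative, not the platform's real schema.
problem = {
    "name": "Llama Finetuning",
    "problem_type": "text_causal_classification_modeling",
    "model_source": "HF",                      # pull weights from the Hugging Face Hub
    "model_name": "meta-llama/Llama-2-7b-hf",  # gated repo: license must be accepted
    "secrets_blueprint": "HF Meta",            # blueprint holding the HF read token
}

# Hub model ids follow the "<org>/<repo>" convention.
org, repo = problem["model_name"].split("/")
```

The `model_name` must match the Hub id exactly, since it is resolved against the Hub at run time using the token from the secrets blueprint.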


Dataset


Output storage

  • Model Storage: HF
  • Model Name: llama-finetuned
  • Store only the LoRA Adapters: true
  • Secrets Blueprint: Write Token
    • A token with write access to HF
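Storing only the LoRA adapters keeps the uploaded artifact tiny, because the adapters hold a small fraction of the model's parameters. The arithmetic below is a back-of-the-envelope sketch; the rank and target modules are illustrative assumptions, not the platform's defaults.

```python
# Why "Store only the LoRA Adapters" is cheap: rough parameter count.
# Rank and target modules are illustrative assumptions.
d_model  = 4096   # Llama-2-7b hidden size
n_layers = 32
rank     = 16     # assumed LoRA rank

# Assume LoRA on two attention matrices (q_proj, v_proj), each d_model x d_model.
# Each adapted matrix adds A (rank x d_in) + B (d_out x rank) parameters.
per_layer = 2 * rank * (d_model + d_model)
adapter_params = n_layers * per_layer          # 8,388,608 under these assumptions
full_params = 7_000_000_000                    # ~7B base weights

fraction = adapter_params / full_params        # well under 1% of the full model
```

So the adapter upload is on the order of megabytes, versus the roughly 13 GB needed to push full fp16 weights back to HF.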

Run

  • Run Title: Run 01
  • Resources
    • Accelerator: A10G
    • GPU Count: 1
    • Memory: 64
  • Tracking
    • Experiment name: Llama Finetuning
    • API key: generated from the User > API Keys dropdown
    • Tracking mode: after_epoch
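A single A10G (24 GB of VRAM) is enough here because LoRA freezes the base weights: gradients and optimizer states exist only for the small set of adapter parameters. The arithmetic below is a rough sketch that ignores activations and framework overhead.

```python
# Rough memory sketch for LoRA fine-tuning of a 7B model on one 24 GB A10G.
# Illustrative arithmetic only; ignores activations and overhead.
params = 7_000_000_000
bytes_fp16 = 2

weights_gb = params * bytes_fp16 / 1e9   # frozen base weights in fp16

# With LoRA, optimizer states and gradients cover only the adapter
# parameters (well under 1% of the total), so the frozen weights dominate.
a10g_vram_gb = 24
fits = weights_gb < a10g_vram_gb
```

Full fine-tuning, by contrast, would need optimizer states and gradients for all 7B parameters and would not fit on a single A10G.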