Fine-Tuning LLM Series (Part 1): Optimize LLMs with LoRA & Quantization
Join Domino Data Lab for our recurring Customer Tech Hour series. In this session, we'll review how to optimize LLMs with LoRA and quantization.
Breaking GenAI Barriers with Efficient Fine-Tuning Techniques
Discover the future of smart fine-tuning in Generative AI! Join our two-part webinar series to learn how to harness the power of LLMs faster, and without breaking the bank, using parameter-efficient fine-tuning. See how LoRA, ZeRO, quantization, and more are revolutionizing LLM adaptation. Don't miss out on unlocking the potential of large language models!
In Part 1 of this series, we'll explore the optimization techniques of quantization and LoRA by reviewing the following:
- Explore PEFT (parameter-efficient fine-tuning) techniques for the Falcon-7B LLM using LoRA and PyTorch Lightning
- Discover the power of quantization and the Hugging Face Trainer on Domino, using Falcon-40B
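To preview the idea behind LoRA covered in the session, here is a minimal sketch in plain PyTorch: the pretrained weights are frozen, and only a small pair of low-rank matrices is trained. This is an illustrative toy layer, not the Falcon-7B setup or the PEFT library API used in the webinar; the class name, rank, and scaling are assumptions for the example.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank (LoRA) update.

    Illustrative toy, not the PEFT library implementation: the base
    weights stay frozen while only the rank-r factors A and B train.
    """
    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: int = 16):
        super().__init__()
        # Stand-in for a pretrained layer; its weights are frozen.
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)
        self.base.bias.requires_grad_(False)
        # Low-rank factors: the only trainable parameters.
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Base output plus the scaled low-rank correction B @ A @ x.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

layer = LoRALinear(1024, 1024, r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
```

With a rank of 8 on a 1024x1024 layer, the trainable parameters are roughly 1.5% of the total, which is the parameter-efficiency win LoRA delivers at full model scale.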