Fine-Tuning LLM Series (Part 2): Fine-Tuning with Ray & DeepSpeed ZeRO

Join Domino Data Lab for our recurring Customer Tech Hour series. In this session, we'll review how to optimize LLMs with LoRA and quantization.

About this course

Breaking GenAI Barriers Series, Part 2: Fine-Tuning LLMs with Ray and DeepSpeed ZeRO

Breaking GenAI Barriers with Efficient Fine-Tuning Techniques

In this series, you'll uncover the future of smart fine-tuning in Generative AI! Join our two-part webinar series as we show you how to harness the power of LLMs faster, and without breaking the bank, using parameter-efficient fine-tuning. Learn how LoRA, ZeRO, quantization, and more are revolutionizing LLM adaptation. Don't miss out on unlocking the potential of large language models!

In Part 2 of this series, we'll explore fine-tuning with Ray and DeepSpeed ZeRO.

One of the biggest challenges in LLM fine-tuning is fitting the model in memory: even with PEFT, the model's weights must sit in GPU memory throughout training. That is where ZeRO comes in, partitioning model states across multiple GPUs and cluster nodes to make training both feasible and efficient. Join us to unravel the secrets of fine-tuning GPT-J-6B using Ray and DeepSpeed ZeRO. The webinar offers a first-hand look at how to approach this advanced training strategy.

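To give a sense of what this looks like in practice, here is a minimal sketch of the pattern: Ray Train orchestrates the distributed workers while a DeepSpeed ZeRO stage 3 configuration shards optimizer states, gradients, and parameters across GPUs. The model ID, worker count, batch sizes, and the Ray 2.x-style TorchTrainer API shown here are illustrative assumptions, not the webinar's exact code.

```python
# Illustrative sketch only: fine-tuning GPT-J-6B with Ray Train + DeepSpeed ZeRO-3.
# Model ID, worker count, and batch sizes are assumptions, not the webinar's exact setup.
from ray.train import ScalingConfig
from ray.train.torch import TorchTrainer

# ZeRO stage 3 shards parameters, gradients, and optimizer states across workers,
# so no single GPU has to hold the full 6B-parameter model state.
deepspeed_config = {
    "fp16": {"enabled": True},
    "zero_optimization": {
        "stage": 3,
        "offload_optimizer": {"device": "cpu"},  # optional: push optimizer state to CPU RAM
        "overlap_comm": True,
    },
    "train_micro_batch_size_per_gpu": 8,
    "gradient_accumulation_steps": 1,
}

def train_loop_per_worker(config):
    # Runs once per Ray worker/GPU. A typical loop loads the model with
    # transformers and hands the DeepSpeed config to the Hugging Face Trainer
    # via TrainingArguments(deepspeed=config["deepspeed"]).
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6b")
    tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6b")
    # ... tokenize your dataset, build a Trainer(...), then call trainer.train()

trainer = TorchTrainer(
    train_loop_per_worker,
    train_loop_config={"deepspeed": deepspeed_config},
    scaling_config=ScalingConfig(num_workers=16, use_gpu=True),  # e.g. 16 GPU workers
)
result = trainer.fit()
```

The key idea is that per-GPU memory scales down with the number of workers: under ZeRO-3, each GPU holds only its shard of parameters, gradients, and optimizer state, which is what makes fine-tuning a 6B-parameter model practical on a cluster of commodity GPUs.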