From the course: LLaMa for Developers


Optimizing LLaMA prompts with DSPy


- [Instructor] Now that we've learned to prompt with LLaMA, is there a way to optimize our prompts further? In this video, we're going to cover how we can optimize our LLaMA prompts with a framework called DSPy. So let's get started under branch 05_05. To run this code, I used an A100 GPU. You can use a T4, but it would take a very long time. So let's walk through this notebook. Let's start off by defining what DSPy is. DSPy is a prompting framework meant to abstract LLM programs. We can take a look at their GitHub page. Let's click on that and scroll down. Now, DSPy came out of a research lab at Stanford, showing that you can get very good results by automating prompt generation. Let's click on the paper. This DSPy paper is about building self-improving pipelines, and one of the models they use is LLaMA 2 13B Chat. So let's scroll down. The conclusion of the paper is that you can dramatically improve your prompting, by between 16 and 40%. So let's go…
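To make that concrete, here's a minimal sketch of what a DSPy program looks like, assuming a Hugging Face-hosted LLaMA 2 13B Chat checkpoint. The checkpoint name, signature fields, and example question are illustrative rather than taken from the course notebook; the calls follow DSPy's documented Signature/Predict pattern.

```python
import dspy

# A minimal sketch, assuming a Hugging Face-hosted LLaMA 2 13B Chat model;
# the checkpoint name and example question are illustrative.
lm = dspy.HFModel(model="meta-llama/Llama-2-13b-chat-hf")
dspy.settings.configure(lm=lm)

class BasicQA(dspy.Signature):
    """Answer questions with short, factual answers."""
    question = dspy.InputField()
    answer = dspy.OutputField(desc="often just a few words")

# Predict turns the signature into an actual prompt for the configured LM.
qa = dspy.Predict(BasicQA)
prediction = qa(question="What does DSPy abstract away?")
print(prediction.answer)

# To optimize the prompt automatically, you would compile the program with
# a metric and a small training set (both omitted here):
# optimizer = dspy.teleprompt.BootstrapFewShot(metric=my_metric)
# compiled_qa = optimizer.compile(qa, trainset=my_trainset)
```

The commented compile step at the end is where the automated prompt optimization happens: an optimizer such as BootstrapFewShot rewrites and augments the prompt against your metric and training examples, which is the mechanism behind the gains the paper reports.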
