Hands-on LLM Course
Learn to train and deploy a real-time financial advisor powered by LLMs
In this tutorial you will design, build, and deploy a financial advisor using LLMs and MLOps best practices.
With this hands-on tutorial, we want to help you go beyond LangChain demos in Jupyter notebooks and start building real-world ML products powered by LLMs.
This hands-on FREE course is brought to you by Paul Iusztin, Alexandru Răzvanț and Pau Labarta Bajo.
1. Intro to the course
🎬 Video lecture
This project is not just a demo, but a fully working product that combines the latest advancements in LLMs with established MLOps design patterns.
2. How to fine-tune an open-source LLM
🎬 Video lecture
Learn how to take an open-source LLM and fine-tune it for your specific task and dataset.
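To make this concrete, here is a minimal sketch of how a task-specific training sample might be rendered into a prompt for supervised fine-tuning. The template and field names are illustrative, not the course's exact schema.

```python
# Illustrative only: the exact prompt template and dataset schema used in the course may differ.
from dataclasses import dataclass

PROMPT_TEMPLATE = """You are a financial advisor.
### Question:
{question}

### Context:
{news_context}

### Answer:
{answer}"""


@dataclass
class TrainingSample:
    question: str
    news_context: str
    answer: str


def to_prompt(sample: TrainingSample) -> str:
    """Render one (question, context, answer) triple into the text the LLM is trained on."""
    return PROMPT_TEMPLATE.format(
        question=sample.question,
        news_context=sample.news_context,
        answer=sample.answer,
    )


if __name__ == "__main__":
    sample = TrainingSample(
        question="Should I buy Tesla stock?",
        news_context="Tesla reported record quarterly deliveries.",
        answer="I cannot give personalized advice, but here is what the news suggests...",
    )
    print(to_prompt(sample))
```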
3. Build the fine-tuning pipeline
👨🏽‍💻 Source code | 🎬 Video lecture
In this lecture you will fine-tune an open-source LLM (Falcon 7B) using open-source libraries and the Beam serverless computing platform.
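As a rough sketch of the parameter-efficient setup behind this lecture, the snippet below attaches LoRA adapters to a 4-bit quantized Falcon 7B using the transformers and peft libraries. The hyperparameters are illustrative; the full training script, experiment tracking, and Beam deployment live in the linked source code.

```python
# Minimal sketch of parameter-efficient fine-tuning setup (LoRA on a 4-bit base model).
# Hyperparameters are illustrative; the course's training script adds data loading,
# the training loop, experiment tracking, and Beam deployment.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

MODEL_NAME = "tiiuae/falcon-7b-instruct"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize the frozen base model to 4-bit
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME,
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["query_key_value"],     # Falcon's fused attention projection
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()          # only the small LoRA adapters are trainable

# From here you would run a standard supervised fine-tuning loop
# (e.g. transformers.Trainer or trl.SFTTrainer) over the prompt-rendered Q&A dataset.
```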
4. Build the real-time feature pipeline
👨🏽‍💻 Source code | 🎬 Video lecture
In this lecture you will learn how to design, build, and deploy a real-time data pipeline that transforms financial news into vector embeddings, using Bytewax and Qdrant.
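The core transform of that pipeline looks roughly like the sketch below: embed a news item and upsert it into a Qdrant collection. The Bytewax dataflow that feeds this function with a live news stream is in the linked source code; the collection name and embedding model here are assumptions for illustration.

```python
# Core transform of the feature pipeline: embed a news article and upsert it into Qdrant.
# The real pipeline wires this into a Bytewax dataflow over a live news stream;
# the collection name and embedding model below are illustrative.
from sentence_transformers import SentenceTransformer
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams, PointStruct

COLLECTION = "financial_news"

embedder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
qdrant = QdrantClient(url="http://localhost:6333")

# Create the collection once, sized to the embedding model's output dimension.
qdrant.recreate_collection(
    collection_name=COLLECTION,
    vectors_config=VectorParams(
        size=embedder.get_sentence_embedding_dimension(),
        distance=Distance.COSINE,
    ),
)


def ingest_news(news_id: int, headline: str, body: str) -> None:
    """Embed one news article and store it with its payload for later retrieval."""
    vector = embedder.encode(f"{headline}\n{body}").tolist()
    qdrant.upsert(
        collection_name=COLLECTION,
        points=[
            PointStruct(
                id=news_id,
                vector=vector,
                payload={"headline": headline, "body": body},
            )
        ],
    )


ingest_news(1, "Fed holds rates steady", "The Federal Reserve kept interest rates unchanged...")
```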
5. Build the inference pipeline
👨🏽‍💻 Source code | 🎬 Video lecture
Let's deploy the final agent as a serverless API with Beam, the Qdrant vector DB, and LangChain.
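At a high level, the inference step retrieves the most relevant news from Qdrant and injects it into the advisor prompt before calling the fine-tuned LLM. The sketch below shows that retrieval-augmented step in plain Python for clarity; the course orchestrates it with LangChain and serves it as a Beam endpoint, and the collection name and embedding model are assumptions carried over from the feature pipeline sketch above.

```python
# Sketch of the retrieval-augmented generation (RAG) step behind the advisor's API.
# The course orchestrates this with LangChain and serves it via Beam; retrieval and
# prompt assembly are shown here in plain Python with illustrative names.
from sentence_transformers import SentenceTransformer
from qdrant_client import QdrantClient

COLLECTION = "financial_news"   # must match the collection written by the feature pipeline

embedder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
qdrant = QdrantClient(url="http://localhost:6333")


def build_prompt(question: str, top_k: int = 3) -> str:
    """Retrieve the most relevant news and inject it into the advisor prompt."""
    hits = qdrant.search(
        collection_name=COLLECTION,
        query_vector=embedder.encode(question).tolist(),
        limit=top_k,
    )
    context = "\n".join(hit.payload["headline"] for hit in hits)
    return (
        "You are a financial advisor.\n"
        f"### Latest news:\n{context}\n\n"
        f"### Question:\n{question}\n\n"
        "### Answer:\n"
    )


# The resulting prompt is sent to the fine-tuned Falcon 7B model loaded inside the serverless endpoint.
print(build_prompt("Should I buy Tesla stock?"))
```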