Overview
The evolution and adoption of large language models (LLMs) have been revolutionary, with retrieval-based systems at the forefront of this leap. These models are not just automation tools; they are productivity partners, capable of holding informed conversations by interacting with a wide array of tools and documents. This course is designed for those eager to explore the potential of such systems, focusing on practical deployment and the efficient implementation needed to meet the considerable demands of both users and deep learning models. As we explore the intricacies of LLMs, participants will gain insight into advanced orchestration techniques, including internal reasoning, dialog management, and effective tooling strategies.
By the end of this workshop, participants will have a solid understanding of RAG agents and the tools needed to develop their own LLM applications, and will have the opportunity to earn a certificate demonstrating subject-matter competency and supporting professional career growth.
Please note that this is a 2-day event; participation is required on both days.
Topics Covered
The workshop covers the following topics:
- LLM Inference Interfaces
- Pipeline Design with LangChain, Gradio, and LangServe
- Dialog Management with Running States
- Working with Documents
- Embeddings for Semantic Similarity and Guardrailing
- Vector Stores for RAG Agents
Each section is designed to equip participants with the knowledge and skills needed to develop and deploy advanced LLM systems effectively.
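To give a flavor of the retrieval step that vector stores and embeddings enable in a RAG agent, here is a minimal, self-contained sketch. It is illustrative only: real pipelines use a learned embedding model and a vector store (as covered in the workshop), while this toy stands in a bag-of-words vector and cosine similarity so the code runs with the standard library alone. All function names here are hypothetical, not part of any course material.

```python
# Toy sketch of RAG-style retrieval: rank documents by similarity to a query.
# A bag-of-words Counter stands in for a learned embedding (assumption).
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': word-count vector (stand-in for an embedding model)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, documents: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Vector stores index document embeddings for fast similarity search.",
    "Gradio builds quick web interfaces for machine learning demos.",
    "Dialog management keeps track of conversation state across turns.",
]
print(retrieve("How do vector stores support similarity search?", docs))
```

In a production RAG agent, the retrieved passages are then inserted into the LLM's prompt as context; the workshop covers how frameworks such as LangChain orchestrate that step.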
Learning Objectives:
The goal of the course is to teach participants how to:
Prerequisites:
Language: English
Level: Technical - Intermediate
Instructors: Maciej Wielgosz and Szymon Mazurek, Cyfronet
Venue: The workshop will be conducted online on the Zoom platform. The meeting link will be sent to registered participants.
Price: Free