Description
Introduction to the workshop and environment setup.
Exploration of LLM inference interfaces and microservices.
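As a taste of what interacting with an inference microservice looks like, the sketch below assembles a request body for an OpenAI-compatible chat endpoint. The endpoint URL and model name are assumptions for illustration; many LLM microservices expose a `/v1/chat/completions` route with this payload shape.

```python
import json

# Hypothetical inference microservice endpoint (assumption: it exposes an
# OpenAI-compatible /v1/chat/completions route, as many LLM servers do).
BASE_URL = "http://localhost:9000/v1/chat/completions"

def build_chat_request(model: str, user_message: str) -> dict:
    """Assemble the JSON body for a chat-completion request."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "temperature": 0.2,
    }

payload = build_chat_request("example-model", "What is a microservice?")
print(json.dumps(payload, indent=2))

# Sending the request would then be a single HTTP POST, e.g.:
#   requests.post(BASE_URL, json=payload).json()
```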
Designing LLM pipelines using LangChain, Gradio, and LangServe.
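The pipeline idea can be sketched in plain Python: LangChain's expression language composes a prompt, a model, and an output parser with the `|` operator, and the toy `Runnable` class below mimics that composition so the example runs without any packages installed. The "LLM" here is a stand-in function, not a real model call.

```python
class Runnable:
    """Toy stand-in for LangChain-style composable steps (assumption:
    this mirrors the `prompt | llm | parser` pattern, not the real API)."""

    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # Chaining two runnables yields a runnable that applies them in order.
        return Runnable(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

prompt = Runnable(lambda q: f"Answer concisely: {q}")
llm = Runnable(lambda p: f"[model output for: {p}]")   # placeholder model
parser = Runnable(lambda raw: raw.strip("[]"))          # output parser

chain = prompt | llm | parser
print(chain.invoke("What does LangServe do?"))
# → model output for: Answer concisely: What does LangServe do?
```

In the workshop's real pipelines, the placeholder `llm` step would be an actual model client, and LangServe would expose the composed chain as an HTTP endpoint while Gradio provides the chat UI.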
Managing dialog states and integrating knowledge extraction.
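A minimal sketch of dialog-state management, under the assumption that state is a dictionary of "slots" filled by simple pattern matching; a real system would use an LLM or an NER model for the knowledge-extraction step. All function and slot names here are illustrative.

```python
import re

def extract_slots(utterance: str) -> dict:
    """Toy knowledge extraction: pull a user name from the utterance.
    (Assumption: regex stands in for an LLM/NER extractor.)"""
    slots = {}
    match = re.search(r"my name is (\w+)", utterance, re.IGNORECASE)
    if match:
        slots["name"] = match.group(1)
    return slots

def update_state(state: dict, utterance: str) -> dict:
    """Return a new dialog state with extracted slots merged in and the
    utterance appended to the conversation history."""
    new_state = dict(state)
    new_state.update(extract_slots(utterance))
    new_state["history"] = state.get("history", []) + [utterance]
    return new_state

state = {}
state = update_state(state, "Hi, my name is Ada")
state = update_state(state, "Book a table for two")
print(state["name"], len(state["history"]))
# → Ada 2
```

Keeping `update_state` pure (returning a fresh dict rather than mutating in place) makes each dialog turn easy to inspect and replay, which is handy when debugging a chain.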