LLM for private data

Europe/Warsaw
online

Description

This training focuses on the possibilities of using large language models (LLMs) to interact with private data. It introduces tools that harness the generative power of artificial intelligence in scenarios where data cannot leave the private environment at any point. The training covers the architecture and data requirements for building a private ChatGPT-like assistant that understands the semantics of the problems being addressed while keeping private data under full control.

Requirements

Participants' computers should have an SSH client installed.

Venue

The workshop will be conducted online on the Zoom platform. The meeting link will be sent to registered participants.

Language

English or Polish, depending on the participants.

Registration

25 / 25

Agenda
    • 1
      Overview of Large Language Models (LLMs)

      Introduction to LLMs and their capabilities in text generation and natural language processing.

    • 2
      Transformers

      Discussion of the transformer architecture, which forms the basis of LLMs.
      Presentation of the attention mechanism as a key element of LLMs, explaining how it lets models selectively focus on specific information; a minimal sketch follows the agenda.
      Examples of how attention is used in LLMs.

    • 3
      Various Methods for Customizing LLMs

      Introduction to different techniques used to customize LLMs for specific tasks, domains, and data.

    • 12:00
      Lunch break
    • 4
      LLM Fine-Tuning

      The process of further training a pre-trained LLM on specific data.
      Using fine-tuning to adapt the model to specific applications (practical exercises); a minimal sketch follows the agenda.

    • 5
      Retrieval Augmented Generation (RAG)

      Explanation of the RAG approach, which combines text generation with information retrieval from an internal document set; a minimal sketch follows the agenda.
      Practical examples of RAG usage.
      Examples of advanced techniques for improving RAG performance (practical exercises).
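
Code sketches

The attention mechanism discussed in session 2 can be illustrated in a few lines of code. The sketch below is a minimal, framework-free version of scaled dot-product attention; it is not part of the training materials, and the function name and toy input are illustrative only.

```python
# Minimal sketch of scaled dot-product attention using NumPy (illustrative only).
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Return the attention output and the attention weights.

    Q, K, V: arrays of shape (sequence_length, d_model).
    """
    d_k = K.shape[-1]
    # Similarity of each query to every key, scaled by sqrt(d_k).
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax turns scores into weights that sum to 1 for each query position.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output position is a weighted mixture of the value vectors.
    return weights @ V, weights

# Toy example: 3 tokens with 4-dimensional embeddings, used as self-attention.
x = np.random.rand(3, 4)
output, weights = scaled_dot_product_attention(x, x, x)
print(weights.round(2))  # each row shows where a token "focuses"
```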
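
Fine-tuning (session 4) is typically done with an existing training framework. The sketch below is a hypothetical example using the Hugging Face transformers and datasets libraries; the model name "gpt2", the file "private_docs.txt", and the hyperparameters are placeholders, not the setup used in the exercises.

```python
# Hypothetical sketch: fine-tuning a small causal LM on local, private text.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # placeholder; any locally hosted base model works
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Private documents stay on the machine: they are read from a local file.
dataset = load_dataset("text", data_files={"train": "private_docs.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("finetuned-model")
```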
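
The RAG approach from session 5 can be summarized as "embed, retrieve, then prompt". The sketch below assumes the sentence-transformers library for local embeddings; the embedding model name, the sample documents, and the commented-out generate() call are placeholders, and no data leaves the local environment.

```python
# Minimal RAG sketch: embed an internal document set locally, retrieve the
# most relevant passages for a question, and build a prompt for a local LLM.
import numpy as np
from sentence_transformers import SentenceTransformer

documents = [
    "The data retention policy requires deleting backups after 90 days.",
    "VPN access is granted only to employees with a hardware token.",
    "Quarterly reports are stored on the internal file server.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder local model
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

def retrieve(question, k=2):
    """Return the k documents most similar to the question (cosine similarity)."""
    q = embedder.encode([question], normalize_embeddings=True)
    scores = doc_vectors @ q.T  # normalized vectors: dot product = cosine
    top = np.argsort(scores.ravel())[::-1][:k]
    return [documents[i] for i in top]

question = "How long are backups kept?"
context = "\n".join(retrieve(question))
prompt = (f"Answer using only the context below.\n\n"
          f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")
# `generate` stands in for any locally hosted LLM endpoint; data stays private.
# answer = generate(prompt)
print(prompt)
```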