Dirk Schrödter - Minister for Digitalisation and Head of the State Chancellery of the State of Schleswig-Holstein
PROGRAM
Monday - July 7, 2025
18:00 - Welcome with Patron Dirk Schrödter
18:20 - Keynote: To Be Announced – Stay Tuned for a Special Guest!
19:00 - Get-Together with Food and Music
Tuesday - July 8, 2025
08:30 - Breakfast
09:00 - Morning Welcome
10:30 - Workshop: To Be Announced – Exciting Sessions Coming Soon!
13:00 - Lunch Break
14:00 - Workshop: Introduction to LeRobot // by Jeremy von Winckelmann
16:00 - Workshop: To Be Announced – More Surprises Await!
18:00 - Reception with Food / Bar Opening
Wednesday - July 9, 2025
08:30 - Breakfast
09:00 - Morning Welcome
10:30 - Workshop: My Model, My Rules: Self-Hosting Large Language Models // by Henrik Horst & Fynn Junge
13:00 - Lunch Break
15:00 - Workshop: Hands-on with Open Web UI: Build, Explore, Interact (TBC) // by Marius Heine (Geprog)
18:00 - Reception with Food / Bar Opening
Thursday - July 10, 2025
08:30 - Breakfast
09:00 - Morning Welcome
10:30 - Workshop: To Be Announced – Federated Learning? AI in Production? Find Out Soon!
13:00 - Lunch Break
14:00 - Workshop: To Be Announced – More Details Coming!
16:00 - Project Presentations: Results from the Machine Learning with TensorFlow Course
18:00 - Reception with Food / Bar Opening
Friday - July 11, 2025
08:30 - Breakfast
09:00 - Morning Welcome
11:00 - Project Presentations, Part I
13:00 - Lunch Break
14:00 - Project Presentations, Part II
16:00 - Closing
PROJECTS
Dynamic Object Interaction with a Boston Dynamics Spot Robot
Prof. Dr. Sören Pirk - Christian-Albrechts-University of Kiel
The goal of the project is to build a dynamic mobile manipulation platform by integrating a robotic arm with the Boston Dynamics Spot robot. We aim to enable the robot to autonomously navigate and interact with its environment, performing tasks such as pointing at landmarks or picking up objects. The project may also incorporate language-based interaction, allowing users to control or query the robot through natural-language commands.
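For a flavor of what the control stack could look like, here is a minimal sketch using the official Boston Dynamics Spot Python SDK (bosdyn-client). It assumes a Spot Arm, while the project may integrate a different arm; the robot IP, credentials, and target pose are placeholders, not project specifics.

```python
# Minimal sketch: stand Spot up and command the gripper to a Cartesian
# pose, as if pointing at a landmark. Requires `pip install bosdyn-client`.
# IP, credentials, and the pose below are placeholders.
import bosdyn.client
from bosdyn.client.lease import LeaseClient, LeaseKeepAlive
from bosdyn.client.robot_command import (RobotCommandBuilder,
                                         RobotCommandClient, blocking_stand)

sdk = bosdyn.client.create_standard_sdk('SpotManipulationSketch')
robot = sdk.create_robot('192.168.80.3')      # placeholder robot IP
robot.authenticate('user', 'password')        # placeholder credentials
robot.time_sync.wait_for_sync()

lease_client = robot.ensure_client(LeaseClient.default_service_name)
with LeaseKeepAlive(lease_client, must_acquire=True, return_at_exit=True):
    robot.power_on(timeout_sec=20)
    command_client = robot.ensure_client(RobotCommandClient.default_service_name)
    blocking_stand(command_client, timeout_sec=10)

    # Move the gripper 0.9 m ahead of the body at 0.45 m height.
    # Arguments: x, y, z plus a unit quaternion, in the body frame.
    arm_cmd = RobotCommandBuilder.arm_pose_command(
        0.9, 0.0, 0.45, 1.0, 0.0, 0.0, 0.0, 'body', seconds=2.0)
    command_client.robot_command(arm_cmd)
```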
Look'n'Learn AI
We are building "Look'n'Learn AI", an intuitive visual inspection tool that learns from user feedback (OK/NOT OK). It uses local, open-source AI to analyze images and video, letting users teach the system their specific quality criteria through simple interactions and chat. Our goal is a self-improving visual assistant that adapts to varied inspection tasks without needing pre-labeled datasets.
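One way such feedback-driven learning can work, shown purely as an illustrative sketch rather than the project's actual implementation: embed each image with a pretrained vision encoder and keep per-class prototypes that are updated on every OK/NOT OK click, so no pre-labeled dataset is required. The embedding function here is a stub; a real system might use a local open-source encoder such as CLIP.

```python
# Illustrative sketch: a tiny nearest-centroid classifier over image
# embeddings that learns "OK" vs. "NOT OK" from incremental user feedback.
import numpy as np

class FeedbackInspector:
    def __init__(self, dim: int):
        self.prototypes = {}   # label -> running mean embedding
        self.counts = {}       # label -> number of feedback samples
        self.dim = dim

    def add_feedback(self, embedding: np.ndarray, label: str) -> None:
        """Update the running mean for `label` with one user-judged image."""
        n = self.counts.get(label, 0)
        proto = self.prototypes.get(label, np.zeros(self.dim))
        self.prototypes[label] = (proto * n + embedding) / (n + 1)
        self.counts[label] = n + 1

    def predict(self, embedding: np.ndarray) -> str:
        """Return the label whose prototype is closest by cosine similarity."""
        def cos(a, b):
            return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
        return max(self.prototypes,
                   key=lambda lbl: cos(embedding, self.prototypes[lbl]))

# Usage with a stubbed embedding function (placeholder for a real encoder):
embed = lambda image: np.random.rand(512)
inspector = FeedbackInspector(dim=512)
inspector.add_feedback(embed("part_001.jpg"), "OK")
inspector.add_feedback(embed("part_002.jpg"), "NOT OK")
print(inspector.predict(embed("part_003.jpg")))
```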
Fine-Tuning Whisper for Sign Language Subtitling
Steffen Brandt - Opencampus
This project fine-tunes a video-adapted Whisper model on sign language data using Google Cloud Vertex AI to generate accurate subtitles and enhance accessibility.
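For orientation, a single training step for a standard (audio) Whisper model with Hugging Face transformers looks roughly like the sketch below; the project's video adaptation and its sign language dataset are not shown, and the model name and dummy inputs are placeholders.

```python
# Sketch of one Whisper fine-tuning step with Hugging Face transformers.
# Random features stand in for the project's real (video-adapted) inputs.
import torch
from transformers import WhisperProcessor, WhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# Dummy batch: log-mel features of shape (batch, mel bins, frames) and a
# tokenized subtitle string as the training target.
input_features = torch.randn(1, 80, 3000)
labels = processor.tokenizer("hello world", return_tensors="pt").input_ids

model.train()
loss = model(input_features=input_features, labels=labels).loss
loss.backward()
optimizer.step()
```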
More Exciting Projects Coming Soon!
Your Name Here - Your Institution
Project descriptions will be added as participants submit their proposals. Stay tuned for innovative AI and machine learning projects!