Dynamic Object Interaction with a Boston Dynamics Spot Robot
Prof. Dr. Sören Pirk - University of Kiel
Anton Wagner - University of Kiel
The goal of this project is to build a dynamic mobile manipulation platform by integrating a robotic arm with the Boston Dynamics Spot robot. We aim to enable the robot to autonomously navigate and interact with its environment, performing tasks such as pointing at landmarks or picking up objects. The project may also incorporate language-based interaction, allowing users to control or query the robot through natural language commands.
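To give a flavor of the stack involved, here is a minimal sketch of connecting to Spot and commanding it to stand with the official bosdyn-client SDK; the IP address and credentials are placeholders, and arm and navigation commands would build on the same pattern.

```python
# Minimal sketch: connect to Spot and issue a stand command via the
# official bosdyn-client SDK. Hostname and credentials are placeholders.
import bosdyn.client
from bosdyn.client.lease import LeaseClient, LeaseKeepAlive
from bosdyn.client.robot_command import RobotCommandClient, blocking_stand

sdk = bosdyn.client.create_standard_sdk('SpotManipulationDemo')
robot = sdk.create_robot('192.168.80.3')    # robot IP (placeholder)
robot.authenticate('user', 'password')      # credentials (placeholder)
robot.time_sync.wait_for_sync()             # required before sending commands

lease_client = robot.ensure_client(LeaseClient.default_service_name)
with LeaseKeepAlive(lease_client, must_acquire=True, return_at_exit=True):
    robot.power_on(timeout_sec=20)
    command_client = robot.ensure_client(RobotCommandClient.default_service_name)
    blocking_stand(command_client, timeout_sec=10)  # robot stands, ready for arm/nav tasks
```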
Semiformal AI Prompting: Eliminating AI Guesswork
Daniel Dietrich - Typefox
Soulaymen Chouri - Typefox
Based on an example web application generator, we'll explore how structured syntax produces more precise results than ambiguous natural language prompts. Use a playground to create full-stack web applications of any kind while experimenting with the DSL approach. There is also room to explore advanced applications that go beyond web app generation.
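To illustrate the contrast (the snippet below uses an invented stand-in syntax, not Typefox's actual DSL), compare a free-form prompt with a structured specification:

```python
# Hypothetical illustration of structured vs. free-form prompting.
# The DSL syntax below is an invented stand-in, not Typefox's actual language.

ambiguous_prompt = "Build me a little app to keep track of my books."

structured_prompt = """
app BookTracker {
    entity Book { title: string, author: string, read: boolean }
    page Library { list Book, searchable by title }
    page AddBook { form for Book }
}
"""

# Both strings could be sent to the same generator; the structured version
# leaves far less for the model to guess (entities, fields, pages).
```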
Look'n'Learn AI: Interactive Visual Inspection Assistant
Johannes Köpnick
Eugen Herrmann
Bernd Lundt
Stefan Müller
We're building "Look'n'Learn AI," an intuitive visual inspection tool that learns from user feedback (OK/NOT OK). It uses local, open-source AI to analyze images/video, allowing users to teach the system their specific quality criteria through simple interactions and chat. Our goal is to create a self-improving visual assistant that adapts to various inspection tasks without needing pre-labeled datasets.
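One way such feedback-driven learning could look is a nearest-centroid classifier over embeddings from a locally run, open-source backbone; the sketch below assumes ResNet-18 purely for illustration.

```python
# Sketch: learn OK / NOT OK from user feedback with a nearest-centroid
# classifier over features from a local, open-source backbone.
# The backbone choice (ResNet-18) is an assumption for illustration.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet18_Weights.DEFAULT
# Drop the final classification layer to use the network as a feature extractor.
backbone = torch.nn.Sequential(*list(models.resnet18(weights=weights).children())[:-1]).eval()
preprocess = weights.transforms()

examples = {"OK": [], "NOT OK": []}  # embeddings collected from user feedback

def embed(path: str) -> torch.Tensor:
    with torch.no_grad():
        x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        return backbone(x).flatten()

def give_feedback(path: str, label: str) -> None:
    examples[label].append(embed(path))  # the user teaches the system

def classify(path: str) -> str:
    z = embed(path)
    # Distance to the mean embedding of each class seen so far.
    dists = {k: torch.dist(z, torch.stack(v).mean(0)) for k, v in examples.items() if v}
    return min(dists, key=dists.get)
```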
Coin Raiders: Looting Lara’s Purse for Science
Christopher Hansen - University of Kiel
Ancient coins are tiny time machines — packed with rich history, powerful imagery, and hidden stories. But here’s the kicker: despite decades of research, it's still an open challenge to automatically describe and locate what's on these coins — be it emperors, animals, gods, weapons, or cryptic inscriptions. Build a system that not only tells the story of each coin but also shows where each element is — from portraits to legends. You can go wild with segmentation models, vision-language models, or multi-modal transformers. We’re open to fresh, unconventional ideas!
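As one possible starting point, an open-vocabulary detector such as OWL-ViT can localize free-text queries like "emperor portrait" or "inscription" on a coin image; the queries, checkpoint, and threshold below are illustrative assumptions, not a prescribed solution.

```python
# Sketch: zero-shot localization of coin elements with OWL-ViT
# (an open-vocabulary detector from Hugging Face transformers).
# Queries, checkpoint, and threshold are illustrative assumptions.
import torch
from PIL import Image
from transformers import OwlViTProcessor, OwlViTForObjectDetection

processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")

image = Image.open("coin.jpg").convert("RGB")
queries = [["emperor portrait", "eagle", "inscription", "wreath"]]

inputs = processor(text=queries, images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
results = processor.post_process_object_detection(
    outputs, threshold=0.1, target_sizes=target_sizes)[0]
for box, score, label in zip(results["boxes"], results["scores"], results["labels"]):
    print(queries[0][int(label)], [round(c) for c in box.tolist()], round(score.item(), 2))
```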
Fine-Tuning Whisper for Sign Language Subtitling
Steffen Brandt - opencampus.sh
This project fine-tunes a video-adapted Whisper model on sign language data using Google Cloud Vertex AI to generate accurate subtitles and enhance accessibility.
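A rough sketch of the standard Hugging Face fine-tuning scaffolding for Whisper is shown below on dummy data; the project's video-adapted variant would swap the audio frontend for video features, and all names here are placeholders rather than the actual pipeline.

```python
# Sketch of Whisper fine-tuning scaffolding with Hugging Face transformers,
# shown on dummy data. The video-adapted variant would replace the audio
# feature extractor with a video frontend; all names are placeholders.
import torch
from torch.utils.data import Dataset
from transformers import (WhisperForConditionalGeneration,
                          Seq2SeqTrainer, Seq2SeqTrainingArguments)

class DummySignDataset(Dataset):
    """Stands in for preprocessed (features, subtitle-token) pairs."""
    def __len__(self):
        return 8
    def __getitem__(self, idx):
        return {"input_features": torch.randn(80, 3000),  # log-mel-shaped input
                "labels": torch.tensor([50258, 50363, 1770, 50257])}  # token ids

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")

args = Seq2SeqTrainingArguments(output_dir="whisper-sign-subtitles",  # placeholder
                                per_device_train_batch_size=2,
                                max_steps=2, report_to=[])

trainer = Seq2SeqTrainer(model=model, args=args, train_dataset=DummySignDataset())
trainer.train()
```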
Cook Smarter with Personalized AI Recipes
Mark Knaub
Andreas Schmalzel
Pascal Jüngling
Ronny Barthel
Sebastian Dobrowolski
(Student Project)
This app makes cooking simple and tailored to you. Browse a wide range of recipes, and let the AI guide you through each step—adjusted to your skill level. Easily customize any dish to match your dietary needs or preferences, including allergies and ingredient swaps.
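Under the hood, recipe customization could be as simple as a constrained LLM call; the sketch below uses the OpenAI Python client as an illustrative backend, and the model name and prompt are assumptions, not the app's actual stack.

```python
# Sketch: asking an LLM to adapt a recipe to dietary constraints and skill level.
# The OpenAI client, model name, and prompt structure are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def adapt_recipe(recipe: str, constraints: str, skill: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You adapt recipes. Respect all dietary constraints, "
                        f"suggest ingredient swaps, and explain steps for a {skill} cook."},
            {"role": "user", "content": f"Constraints: {constraints}\n\nRecipe:\n{recipe}"},
        ],
    )
    return response.choices[0].message.content

print(adapt_recipe("Spaghetti Carbonara ...", "vegetarian, nut allergy", "beginner"))
```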
In-House Navigation with 3D Mapping & AI
Ali Farooq - GEPROG GmbH
Marius Heine - GEPROG GmbH
Shahwar Zia - GEPROG GmbH
This project aims to extend conventional navigation by introducing 3D indoor navigation that combines augmented visualization over Google Maps with AI-generated indoor spaces. The application will let users navigate large indoor environments such as malls, hospitals, airports, or office buildings: a user can ask "Show me the cafeteria" or "Take me to the nearest exit," and the system will highlight the best route to the destination directly within a 3D model of the building. Key features: AI-generated 3D models of building interiors based on floor plans or textual descriptions; real-time or simulated 3D path rendering over indoor maps; compatibility with GPS and indoor positioning systems (IPS) for accurate location tracking; and voice-guided navigation with an AR view mode for an immersive experience.
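At its core, highlighting the best route reduces to shortest-path search over a graph of the building; below is a minimal sketch with networkx, using an invented toy graph in place of the AI-generated 3D model and IPS positioning.

```python
# Sketch: answering "Take me to the nearest exit" as shortest-path search.
# The building graph, node names, and edge weights (meters) are invented
# placeholders for what the 3D model and indoor positioning would supply.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("lobby", "hallway_a", 12.0), ("hallway_a", "cafeteria", 8.0),
    ("hallway_a", "exit_north", 15.0), ("lobby", "exit_south", 6.0),
])

def nearest_exit(graph: nx.Graph, position: str) -> list[str]:
    exits = [n for n in graph if n.startswith("exit_")]
    # Pick the exit with the smallest weighted path length, then return the route.
    best = min(exits, key=lambda e: nx.shortest_path_length(graph, position, e, weight="weight"))
    return nx.shortest_path(graph, position, best, weight="weight")

print(nearest_exit(G, "lobby"))  # e.g. ['lobby', 'exit_south']
```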
More Exciting Projects Coming Soon!
Your Name Here - Your Institution
Project descriptions will be added as participants submit their proposals. Stay tuned for innovative AI and machine learning projects!