Multimodal Spatial Interfaces for Human-AI Collaboration
This project explores how multimodal AR/VR interfaces—combining gaze, hand gestures, and voice—can improve how people interact and collaborate with AI systems. The goal is to move beyond typing and clicking toward more natural, spatial interaction. I will investigate how these interfaces affect prompt efficiency, support for real-world tasks, and work with complex visualizations. The aim is to reduce the friction of AI use and make it more intuitive, especially in hands-busy or context-rich environments.
What drew me to HCI is its human focus. No matter the technology, it's people who experience it, and their needs should come first. I believe we shouldn't build systems just because we can—we should build them to serve people. HCI provides a framework for making technology more useful, accessible, and aligned with how humans actually think and behave. That’s what continues to motivate my work.
A key challenge is integrating multiple input channels—like gaze, speech, and gestures—into interfaces that feel intuitive and responsive. The AI must interpret these signals in real time and adapt to context, which raises technical, design, and interaction challenges. The project also contributes to the broader question of how to align AI behavior with human intent and control, pushing toward more usable and trustworthy systems.
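To make the fusion challenge concrete, the sketch below shows one minimal (and entirely illustrative) way such channels could be combined: a deictic voice command like "delete that" is resolved against the gaze fixation nearest in time. All names and the time-window heuristic are my own assumptions for this example, not a description of an existing system.

```python
from dataclasses import dataclass

# Illustrative sketch of gaze-speech fusion: resolve the deictic "that"
# in a voice command using the temporally closest gaze fixation.
# Names, structure, and the max_lag heuristic are assumptions.

@dataclass
class GazeEvent:
    timestamp: float  # seconds
    target: str       # object the user was looking at

@dataclass
class VoiceEvent:
    timestamp: float
    command: str      # e.g. "delete that"

def resolve_intent(voice, gaze_history, max_lag=1.0):
    """Replace 'that' in a voice command with the nearest-in-time gaze target."""
    if "that" not in voice.command:
        return voice.command  # nothing deictic to resolve
    # Only consider fixations within max_lag seconds of the utterance.
    candidates = [g for g in gaze_history
                  if abs(g.timestamp - voice.timestamp) <= max_lag]
    if not candidates:
        return None  # ambiguous: the interface should ask for clarification
    nearest = min(candidates, key=lambda g: abs(g.timestamp - voice.timestamp))
    return voice.command.replace("that", nearest.target)

gaze = [GazeEvent(0.2, "chart_A"), GazeEvent(1.1, "chart_B")]
print(resolve_intent(VoiceEvent(1.3, "delete that"), gaze))  # → delete chart_B
```

Even this toy version surfaces the real design questions—how large the fusion window should be, and what to do when no confident referent exists—which is exactly where interaction design and AI interpretation meet.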
In the long term, the project can help make AI systems more accessible, intuitive, and easier to work with. By reducing reliance on complex commands and allowing for more natural interaction, the research can open up AI use to broader audiences. It supports a shift toward technology that is more understandable, responsive, and centered on human needs—contributing to more meaningful and inclusive digital experiences.
Sapere Aude would give me the freedom to push this research direction in a serious and focused way. It’s a chance to build a strong team, run ambitious experiments, and take real leadership in shaping how we think about human-AI collaboration. It would also help connect my work with leading researchers globally and allow me to invest in new ideas that don’t easily fit into more traditional funding paths.
Growing up, I was a gamer, spending many hours in games like Counter-Strike and World of Warcraft. That naturally got me interested in computers, but it was human-computer interaction that really caught my attention. It is all about exploring new ways we can interact with computers and making things feel intuitive and usable, just like in any good game or interactive system. That mix of creativity and usability still drives how I approach research today.
Aarhus University
Human-Computer Interaction / Multimodal Interfaces / AI
Aarhus
Lessing-Gymnasium Düsseldorf