The SEMULIN project

The goal of the SEMULIN project (Self-supporting Multimodal Interaction) is to develop a self-supporting, natural, and consistent human-machine interface for vehicles driving in automated mode, based on multimodal input/output concepts.

The major challenges are to identify the human’s intended actions, to interpret them, and to derive the resulting actions for the machine.

To this end, three core innovations are to be implemented:

The natural HMI results from the on-demand combination of input/output modalities. In this context, the key design criterion for the user-vehicle interface is meaningful multimodal interaction that is as natural to use as possible.
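
As a minimal, purely illustrative sketch of such on-demand combination (the context signals, thresholds, and channel names below are assumptions for illustration, not the project’s actual logic):

```python
# Toy rule for combining output modalities on demand; all values are invented.

def choose_output_modalities(user_is_looking: bool, cabin_noise_db: float) -> list[str]:
    """Pick output channels that suit the current situation."""
    channels: list[str] = []
    if cabin_noise_db < 70.0:   # speech output only if it can be heard
        channels.append("speech")
    if user_is_looking:         # visual output only if the user can see it
        channels.append("display")
    if not channels:            # always keep at least one fallback channel
        channels.append("haptic")
    return channels

# A noisy cabin with the user looking elsewhere falls back to haptic output.
print(choose_output_modalities(user_is_looking=False, cabin_noise_db=75.0))
```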

In the intelligence cluster, machine learning and artificial intelligence methods are combined with models and methods from psychology. The aim is to identify intended actions and to derive from them reactions that match user expectations as closely as possible.
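
As a rough, hypothetical illustration of deriving a reaction from multimodal evidence (in the project this would be done by trained models informed by psychological findings; the feature names and hand-picked weights below are invented):

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    said_open: float         # speech recognizer score for an "open" command
    gaze_on_window: float    # fraction of recent gaze samples on a window
    pointing_gesture: float  # gesture recognizer score for pointing

# Hand-picked weights standing in for learned, psychology-informed models.
WEIGHTS = {
    "open_window": {"said_open": 0.5, "gaze_on_window": 0.3, "pointing_gesture": 0.2},
    "no_action":   {"said_open": -0.4, "gaze_on_window": 0.0, "pointing_gesture": -0.1},
}

def score_intents(evidence: Evidence) -> dict[str, float]:
    """Score each candidate intent as a weighted sum of the evidence."""
    features = vars(evidence)
    return {
        intent: sum(weight * features[name] for name, weight in weights.items())
        for intent, weights in WEIGHTS.items()
    }

scores = score_intents(Evidence(said_open=0.9, gaze_on_window=0.7, pointing_gesture=0.4))
print(max(scores, key=scores.get), scores)  # "open_window" wins for this evidence
```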

In the technology stack, the input/output modalities are fused in a specialized central unit. The input modalities for emotion, speech, speaker, gaze direction, and gesture are closely linked to the respective sensors, whose functions are extended as part of the project.
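
A minimal sketch of such a central fusion unit under assumed simplifications (late fusion over a fixed time window with confidence-based selection); the names and the windowing heuristic are illustrative, not the project’s implementation:

```python
from dataclasses import dataclass, field

MODALITIES = ("emotion", "speech", "speaker", "gaze", "gesture")

@dataclass
class ModalityEvent:
    modality: str      # one of MODALITIES
    timestamp: float   # seconds since session start
    payload: dict      # recognizer output, e.g. {"text": "open that window"}
    confidence: float  # recognizer confidence in [0, 1]

@dataclass
class FusionUnit:
    window: float = 1.5  # fusion window in seconds (assumed)
    events: list[ModalityEvent] = field(default_factory=list)

    def ingest(self, event: ModalityEvent) -> None:
        """Accept an event from one of the sensor-linked input modalities."""
        if event.modality not in MODALITIES:
            raise ValueError(f"unknown modality: {event.modality}")
        self.events.append(event)

    def fuse(self, now: float) -> dict[str, ModalityEvent]:
        """Late fusion: keep the most confident recent event per modality."""
        self.events = [e for e in self.events if now - e.timestamp <= self.window]
        best: dict[str, ModalityEvent] = {}
        for e in self.events:
            if e.modality not in best or e.confidence > best[e.modality].confidence:
                best[e.modality] = e
        return best

# Speech and gaze events arriving close together are fused into one snapshot.
unit = FusionUnit()
unit.ingest(ModalityEvent("speech", 10.2, {"text": "open that window"}, 0.9))
unit.ingest(ModalityEvent("gaze", 10.4, {"target": "rear_left_window"}, 0.8))
print({m: e.payload for m, e in unit.fuse(now=10.6).items()})
```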

SEMULIN follows an evolutionary, iterative project approach to review and improve its demonstrators and simulators. Based on a use case catalog, a use case story is created that serves as a guideline for the overall project; it is implemented in driving simulators and in the final SEMULIN demonstrator. A reference data collection is followed by an empirical final evaluation. Usability and, ultimately, user satisfaction are determined by combining simulator measurements, standardized questionnaires, and individual interviews.