AWARE2ALL

Situation-aware multimodal interaction in increasingly autonomous cars

Situation-aware multimodal interaction in cars involves developing interfaces and systems that adapt and react to the dynamic context of the driving environment. This approach recognizes that the optimal way for a driver to interact with in-vehicle systems may depend on factors such as traffic conditions, weather, the driver's condition, and other situational aspects. By integrating multiple interaction modalities and taking the environmental context into account, the aim is to increase safety, reduce distraction, and improve the overall driver experience.

In the context of autonomous vehicles, situation-aware multimodal interaction becomes even more important as the vehicle takes on more tasks traditionally performed by the driver. Autonomous vehicles must not only perceive and understand the environment, but also interact with passengers in a way that is both informative and reassuring. CEA and other project partners are addressing these challenges in the AWARE2ALL project, bridging the gap in human-machine interfaces for increasingly autonomous vehicles.

Our work in AWARE2ALL aims to design efficient communication between the vehicle and drivers, passengers, and pedestrians, as well as effective driving assistance. In the AWARE2ALL demonstrators, communication with various mobility stakeholders will be facilitated through several cutting-edge multisensory technologies, including visual, audio, and haptic elements. These technologies aim to improve individuals' awareness of their surroundings, transportation mode, and potential risks by delivering appropriate and timely feedback. CEA will use an audio interface based on sound-emitting panels as well as a previously developed seat cover providing haptic feedback to inform and warn the driver. The haptic interface consists of 12 vibration motors embedded in the foam of a removable seat cover. The proposed technologies will be used in both autonomous and semi-autonomous driving contexts.
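To make the idea concrete, the sketch below shows one plausible way a 12-motor seat cover could be driven, with vibration intensity and pulse rate scaling with warning urgency. The motor layout, the urgency levels, and all names here are illustrative assumptions, not the actual CEA interface.

```python
from dataclasses import dataclass
from enum import IntEnum


class Urgency(IntEnum):
    """Hypothetical graded warning levels, from plain information to a critical alert."""
    INFO = 0
    CAUTION = 1
    WARNING = 2
    CRITICAL = 3


@dataclass
class VibrationCommand:
    """Drive parameters for one motor: duty cycle (0..1) and pulse period (s)."""
    motor_id: int
    intensity: float
    period_s: float


def seat_warning_pattern(urgency: Urgency, side: str = "both") -> list[VibrationCommand]:
    """Map an urgency level to commands for the 12 motors in the seat cover.

    Assumed layout: motors 0-5 sit on the left half of the seat, 6-11 on the
    right, so a lateral hazard can activate only the corresponding half.
    """
    intensity = 0.25 + 0.25 * int(urgency)         # stronger vibration when more urgent
    period_s = max(0.1, 0.8 - 0.2 * int(urgency))  # faster pulsing when more urgent

    if side == "left":
        motors = range(0, 6)
    elif side == "right":
        motors = range(6, 12)
    else:
        motors = range(12)

    return [VibrationCommand(m, intensity, period_s) for m in motors]


if __name__ == "__main__":
    # Example: a critical hazard on the driver's left.
    for cmd in seat_warning_pattern(Urgency.CRITICAL, side="left"):
        print(cmd)
```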

CEA, together with IRT SystemX, will also test and improve existing formal models for selecting the most appropriate communication channel(s) for warnings of graded importance or for information provision. CEA explored such a model, called the Modality Suitability Prediction Model (MSPM), which will be further improved. The goal of the MSPM is to provide a suitability score (the outcome) for each combination of modalities, based on several variables (inputs). To build the first version of the model, we had to consider the available combinations of modalities. For haptic feedback, we considered vibrations in the seat. For auditory signals, we considered audio sounds (non-verbal alarms) and speech (spoken instructions). For visual signals, we considered all the visualization options developed by other AWARE2ALL partners. The model also takes the criticality level of the situation as a parameter.
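As a rough illustration of how such a model could produce suitability scores, the sketch below enumerates the modality combinations named above and ranks them for a given criticality level. The weights, the redundancy bonus, and the three criticality levels are invented placeholders; the real MSPM's input variables and scoring are determined by the project's studies.

```python
from itertools import chain, combinations

# The modality options described above.
MODALITIES = ("seat_vibration", "audio_alarm", "speech", "visual")

# Illustrative per-modality weights per criticality level (0 = information,
# 2 = highly critical). These numbers are placeholders, not the fitted MSPM.
WEIGHTS = {
    0: {"seat_vibration": 0.2, "audio_alarm": 0.3, "speech": 0.7, "visual": 0.8},
    1: {"seat_vibration": 0.6, "audio_alarm": 0.7, "speech": 0.5, "visual": 0.6},
    2: {"seat_vibration": 0.9, "audio_alarm": 0.9, "speech": 0.3, "visual": 0.5},
}


def suitability(combo: tuple[str, ...], criticality: int) -> float:
    """Score one combination of modalities for a given criticality level.

    The score averages the per-modality weights and rewards redundancy
    (multiple channels) in critical situations.
    """
    w = WEIGHTS[criticality]
    base = sum(w[m] for m in combo) / len(combo)
    redundancy_bonus = 0.05 * (len(combo) - 1) * criticality
    return round(base + redundancy_bonus, 3)


def rank_combinations(criticality: int) -> list[tuple[float, tuple[str, ...]]]:
    """Return all non-empty modality combinations, best-scoring first."""
    combos = chain.from_iterable(
        combinations(MODALITIES, r) for r in range(1, len(MODALITIES) + 1)
    )
    return sorted(((suitability(c, criticality), c) for c in combos), reverse=True)


if __name__ == "__main__":
    # Example: the five best channel combinations for a highly critical warning.
    for score, combo in rank_combinations(criticality=2)[:5]:
        print(score, combo)
```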

Would you like more information? Please contact Christian Bolzmacher: christian.bolzmacher@cea.fr
