Imagine you are driving a fully automated vehicle on the highway, destination: a holiday in the South of France. The vehicle has been driving in automated mode for hours, so you are leaning back in your seat, enjoying the sun on your face and listening to an interesting podcast. You feel completely relaxed, and the only thing on your mind is what you will do when you arrive. Then, very suddenly, the automated system can no longer handle the situation and warns you that you have to take over control within a few seconds. In automated driving, this is expected to become one of the most safety-critical moments: the moment automation hands control back to the driver.
UN Regulation 157 on the ‘Uniform provisions concerning the approval of vehicles with regard to Automated Lane Keeping Systems’ states that ‘The system shall detect if the driver is available and in an appropriate driving position to respond to a transition demand by monitoring the driver.’ In practice, the driver is deemed available and attentive when they are primarily looking at the road ahead. However, primarily looking at the forward roadway does not mean the driver is aware of the whole traffic situation; on the contrary, they might have missed all other traffic around them. In Aware2All, we want to know what the driver’s situational awareness really is. That means we want to estimate whether the driver is aware of the environment, including other traffic participants that are important for the driving task.

This starts by defining which actors actually matter for the task at hand, which depends on the action the driver is about to perform. Is a take-over request triggered by an upcoming highway exit? If so, the driver will have to change lanes to take the exit, so the vehicles behind the ego vehicle and those in the right lane will matter to the driver in the near future. Whether the driver has actually seen these vehicles, understands what they are doing, and can predict which actions they are going to perform is called the driver’s current situational awareness. In Aware2All we focus on the first level of situation awareness, the perception layer. So, for now, our research concentrates on the question whether the driver has perceived the aforementioned other vehicles in the scene. The required situational awareness is tracked using different regions around the ego vehicle and whether the driver is looking at these regions. These regions thus represent the (un)certainty the driver has about the state of the objects within them.
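To make the idea of manoeuvre-dependent relevance concrete, the mapping from an upcoming action to the regions the driver must have perceived can be sketched as a simple lookup. The region names and manoeuvre labels below are purely illustrative assumptions, not identifiers from the Aware2All project:

```python
# Hypothetical mapping from the upcoming manoeuvre to the regions around the
# ego vehicle that the driver should be aware of; all names are illustrative.
RELEVANT_REGIONS = {
    "take_highway_exit": ["front", "rear", "right_lane"],   # lane change to the right
    "lane_change_left":  ["front", "rear", "left_lane"],
    "continue_straight": ["front"],
}

def regions_for(manoeuvre: str) -> list[str]:
    """Return the regions the driver should be aware of for a given manoeuvre.

    Falls back to the forward roadway when the manoeuvre is unknown.
    """
    return RELEVANT_REGIONS.get(manoeuvre, ["front"])
```

For the highway-exit example from the text, `regions_for("take_highway_exit")` would return the rear and right-lane regions in addition to the forward roadway.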
The basic idea is that when the driver’s gaze is directed towards one of those regions, an attentional buffer associated with that region increases in value, while the buffers of the other regions decrease. This is similar to the attention buffers of AttenD2.0 [1]. When all buffers are above some minimum value, the driver has sufficient situational awareness. When a buffer reaches zero, however, the driver is assumed to no longer be aware of where the objects in that region are and how they are behaving; another glance at the region is required to restore awareness of the current situation. When at least one buffer is empty, the driver is deemed insufficiently aware to take over the driving task, and the vehicle may need to help the driver regain situational awareness before they can safely control the car again.
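The buffer mechanism described above can be sketched in a few lines. This is a minimal illustration of the multi-buffer idea, not the Aware2All implementation: the gain, decay, and threshold constants are invented for the example, and real systems would tie them to gaze quality and object dynamics.

```python
class SituationAwarenessMonitor:
    """Per-region attentional buffers, in the spirit of AttenD2.0.

    All numeric constants here are illustrative assumptions.
    """

    def __init__(self, regions, max_level=1.0, gain=0.5, decay=0.1, threshold=0.0):
        self.max_level = max_level
        self.gain = gain            # buffer increase per second while gazed at
        self.decay = decay          # buffer decrease per second otherwise
        self.threshold = threshold  # level at/below which a region counts as "unseen"
        self.buffers = {r: max_level for r in regions}  # start fully aware

    def update(self, gazed_region, dt):
        """Advance all buffers by dt seconds; gazed_region may be None."""
        for region in self.buffers:
            if region == gazed_region:
                self.buffers[region] = min(self.max_level,
                                           self.buffers[region] + self.gain * dt)
            else:
                self.buffers[region] = max(0.0,
                                           self.buffers[region] - self.decay * dt)

    def driver_is_aware(self):
        """Sufficient situational awareness only if no buffer is depleted."""
        return all(level > self.threshold for level in self.buffers.values())

    def regions_needing_attention(self):
        """Regions the driver must look at again before a safe take-over."""
        return [r for r, level in self.buffers.items() if level <= self.threshold]
```

With these toy constants, ten seconds of looking only at the forward roadway depletes the rear and right-lane buffers, and `regions_needing_attention()` tells an HMI exactly which regions to direct the driver’s gaze towards.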
A situational awareness estimate could not only support the automated system’s take-over decision making, but also help guide the driver to (re)gain situational awareness. When the estimate is visualized on an HMI, it can direct the driver’s attention to unseen vehicles. In this way, we aim to increase the safety of transitions of control and reduce the number of collisions caused by driver inattentiveness.
[1] ‘Towards a Context-Dependent Multi-Buffer Driver Distraction Detection Algorithm’, IEEE Xplore.