One of the great challenges in analyzing driver experiments is the complexity of the data. Besides eye movements, there is a wealth of other sensor data from the car, such as steering wheel angles, pedal activity, and information from distance sensors and object recognition systems. In addition, there are log files of the interaction between drivers and the vehicle interior, and context information about the traffic situation.
With our solution Blickshift Analytics, you can analyze this data with unprecedented efficiency. You get a clear overview of your data, quickly find relevant time sections, and identify participants with similar eye movement and driving behavior. Analysis tasks that used to take days or weeks can now be completed within a few hours!
In most driving experiments, participants also perform secondary tasks. A common analysis task is to identify recurring eye movement behaviors before an “event” during the secondary task. Alternatively, a complete task block has to be analyzed at a high level of detail and compared with other task blocks in the experiment.
With markings, interactively selected sections of the data, you can analyze your data with respect to different conditions. For example, you can focus your analysis on the eye movement behavior before the drivers pass a crossing that is labeled in the data. You can interactively test different time durations for marking the data, and with automatic components you can quickly compare the eye movements across many participants.
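Blickshift Analytics performs this marking interactively in its GUI. Purely as an illustration of the underlying idea, the same analysis step could be sketched in pandas; all column names and the sample data below are hypothetical, not part of the product:

```python
import pandas as pd

# Hypothetical gaze/sensor log: one row per sample, with a "crossing"
# column that labels the moments when the driver passes a crossing.
data = pd.DataFrame({
    "participant": ["p1"] * 6 + ["p2"] * 6,
    "time_s":      [0, 1, 2, 3, 4, 5] * 2,
    "crossing":    [0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1],
    "fixation_id": [1, 1, 2, 3, 3, 4, 1, 2, 2, 3, 3, 4],
})

def mark_before_event(df, event_col, duration_s):
    """Mark the duration_s seconds preceding the first event sample."""
    event_start = df.loc[df[event_col] == 1, "time_s"].min()
    window = (df["time_s"] >= event_start - duration_s) & (df["time_s"] < event_start)
    return df[window]

# Compare fixation counts in the 2 s before the crossing, per participant,
# as one would do across many participants with the automatic components.
counts = {
    p: mark_before_event(g, "crossing", 2.0)["fixation_id"].nunique()
    for p, g in data.groupby("participant")
}
print(counts)  # → {'p1': 2, 'p2': 1}
```

Changing `duration_s` and re-running corresponds to interactively testing different time durations for the marking.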
Driver models are the basis for advanced driver assistance systems (ADAS). A critical step in the development of driver models is finding optimal model parameters. For example, several fixations are required to perceive a pedestrian with high probability; the exact number of fixations, or an interval for it, is determined through driving experiments.
For the parametrization of driver models, you can use a combination of different visualizations and automatic components in Blickshift Analytics. Despite this flexibility, the workflow is very simple: you only have to connect the necessary visualizations with the automatic components. Then you interactively test the model parameters, and the results are immediately shown in the visualizations.
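To make the parameter-testing step concrete, here is a minimal, hypothetical sketch of what scanning a fixation-count parameter against experiment data amounts to; the trial data and function names are invented for illustration and are not the product's API:

```python
# Hypothetical trials: (number of fixations on a pedestrian, whether
# the participant reported perceiving the pedestrian).
trials = [(1, False), (2, False), (2, True), (3, True), (4, True), (5, True)]

def perception_rate(trials, min_fixations):
    """Fraction of trials at or above the fixation threshold where the
    pedestrian was actually perceived."""
    selected = [perceived for n, perceived in trials if n >= min_fixations]
    return sum(selected) / len(selected) if selected else 0.0

# Interactively trying candidate parameter values and inspecting the
# result corresponds to the connect-and-test workflow described above.
for k in range(1, 5):
    print(f"min_fixations={k}: perception rate {perception_rate(trials, k):.2f}")
```

In the tool, each candidate value would be tested by adjusting the parameter and reading the result off the connected visualizations rather than from printed numbers.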
Many AI algorithms are trained on manually or semi-automatically generated training sets. Compared to generating training sets from images, generating them from time-dependent data is more complex. Since eye movements and sensor data belong to this class of time-dependent data, and suitable off-the-shelf tools have been missing so far, generating training sets for this kind of data required considerable in-house development effort.
For generating training sets, Blickshift Analytics offers one of the most efficient solutions on the market. Again, the workflow is very simple: First, you select sections in the data and annotate them with labels. Second, you export these sections together with the assigned sensor and eye tracking data. Finally, you use the exported data as training sets. You can choose between two modes: selecting your data manually, or using the support of Blickshift’s automatic components.
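The select-annotate-export workflow can be sketched in a few lines of plain Python; the sample stream, section format, and column names below are hypothetical stand-ins for whatever the experiment actually records:

```python
import csv
import io

# Hypothetical sensor/gaze stream: one dict per time-stamped sample.
samples = [
    {"time_s": 0.0, "speed": 50.2, "gaze_x": 0.41},
    {"time_s": 0.5, "speed": 49.8, "gaze_x": 0.44},
    {"time_s": 1.0, "speed": 30.1, "gaze_x": 0.80},
    {"time_s": 1.5, "speed": 29.5, "gaze_x": 0.82},
]

# Manually selected sections, annotated with labels: (start, end, label).
sections = [(0.0, 0.5, "free_driving"), (1.0, 1.5, "braking")]

def export_training_set(samples, sections, fileobj):
    """Write every sample that falls inside a labeled section, together
    with its label, as one CSV row of the training set."""
    writer = csv.DictWriter(fileobj, fieldnames=["time_s", "speed", "gaze_x", "label"])
    writer.writeheader()
    for start, end, label in sections:
        for s in samples:
            if start <= s["time_s"] <= end:
                writer.writerow({**s, "label": label})

buf = io.StringIO()
export_training_set(samples, sections, buf)
print(buf.getvalue())
```

In the automatic mode, the `sections` list would be produced by the tool's pattern-detection components instead of being selected by hand; the export step stays the same.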