The driverless tech specialist developed nuReality to understand how expressive behaviours by AVs, such as flashing lights, can aid in human-machine communication with pedestrians.
Driverless technology specialist Motional is making custom-built virtual reality (VR) environments used to study the interaction between autonomous vehicles (AVs) and pedestrians publicly available for the research community.
It said it is making its nuReality set of VR experiences open source to advance human-machine communication in the AV space.
Motional is using nuReality in its expressive robotics research on how to train robots to respond to their environment similarly to how a person would. It follows the creation of nuScenes, a large-scale autonomous driving dataset that is helping spur industry-wide collaboration and further research to bring safe driverless vehicles to streets and communities faster.
Since its original release in 2019, nuScenes has been downloaded by more than 12,000 academics and researchers and referenced in more than 600 publications, according to Motional. nuScenes also kickstarted a movement of safety-focused data-sharing across the industry.
Writing in a blog post on Medium, Motional’s AV stack chief engineer, Paul Schmitt, said a key challenge to widespread acceptance and adoption of driverless vehicles is clear, safe, and effective communication between AVs and other road users. “When a pedestrian or cyclist crosses the street and a human driver isn’t behind the wheel to signal recognition and intention using, say, hand gestures or facial expressions, how will road users know the vehicle has acknowledged them and will yield to let them cross?”
Schmitt added: “We developed nuReality to understand how expressive behaviours by AVs – flashing lights and deliberate sounds such as exaggerated braking – can aid in human-machine communication with pedestrians and signal a driverless vehicle’s intentions. We’ve found that using expressive behaviours to help AVs communicate with pedestrians in crossing situations enables consumers to more quickly and clearly understand the intent of driverless vehicles and feel more confident in their decisions.”