The driverless tech specialist developed nuReality to understand how expressive behaviours by AVs, such as flashing lights, can aid in human-machine communication with pedestrians.
Virtual reality scenarios include deliberate sounds like exaggerated braking
Driverless technology specialist Motional is making custom-built virtual reality (VR) environments used to study the interaction between autonomous vehicles (AVs) and pedestrians publicly available for the research community.
It said it is making its nuReality set of VR experiences open source to advance human-machine communication in the AV space.
Training robots
Motional is using nuReality in its expressive robotics research into how to train robots to respond to their environment much as a person would. It follows the creation of nuScenes, a large-scale autonomous driving dataset that is helping spur industry-wide collaboration and further research to bring safe driverless vehicles to streets and communities faster.
Since its original release in 2019, nuScenes has been downloaded by more than 12,000 academics and researchers and referenced in more than 600 publications, according to Motional. nuScenes also kickstarted a movement of safety-focused data-sharing across the industry.
Writing in a blog post on Medium, Motional’s AV stack chief engineer, Paul Schmitt, said a key challenge to widespread acceptance and adoption of driverless vehicles is clear, safe, and effective communication between AVs and other road users. “When a pedestrian or cyclist crosses the street and a human driver isn’t behind the wheel to signal recognition and intention using, say, hand gestures or facial expressions, how will road users know the vehicle has acknowledged them and will yield to let them cross?”
“We’ve found that using expressive behaviours to help AVs communicate with pedestrians in crossing situations enables consumers to more quickly and clearly understand the intent of driverless vehicles”
Schmitt added: “We developed nuReality to understand how expressive behaviours by AVs – flashing lights and deliberate sounds such as exaggerated braking – can aid in human-machine communication with pedestrians and signal a driverless vehicle’s intentions. We’ve found that using expressive behaviours to help AVs communicate with pedestrians in crossing situations enables consumers to more quickly and clearly understand the intent of driverless vehicles and feel more confident in their decisions.
“Through our Expressive Robotics research, we realised that the benefits of using VR technology lay not only in experimental control, reproducibility, and ecological validity, but also in practicality and accessibility.”
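As a purely illustrative sketch (not Motional’s implementation), the idea Schmitt describes can be read as a mapping from a vehicle’s yielding intent to outward cues. The function and cue names below are assumptions for illustration only:

def expressive_cues(intends_to_yield: bool) -> list:
    # Map an AV's yielding intent to expressive behaviours.
    # Illustrative only; cue names are assumptions, not Motional's API.
    if intends_to_yield:
        # Acknowledge the pedestrian: flash the light bar and play a
        # deliberate, exaggerated braking sound as the vehicle slows.
        return ["flash_light_bar", "play_exaggerated_brake_sound"]
    # No yield planned: emit no cues, so the pedestrian should not cross.
    return []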
Since testing pedestrian-crossing scenarios in real life would be complex and potentially unsafe, Motional collaborated with animation studio CHRLX to create a bespoke VR environment.
There are two vehicles in the animation files: a conventional human-driven vehicle and a driverless vehicle. The animated AV includes side-mirror and roof-mounted lidar sensors and no visible occupants, while the human-driven model includes a driver who looks ahead and remains motionless during the interaction.
“We included numerous visual (road and building texturing, parked cars, swaying tree limbs) and audible (birds chirping, cars driving by, people talking) elements”
Motional created 10 vehicle animation scenarios (see the illustrative sketch after this list), which included:
human driver stopping at an intersection
AV stopping at an intersection
human driver not stopping at an intersection
AV not stopping at an intersection
AV using expressive behaviour such as a light bar or sounds to signal its intentions
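The list above suggests a simple scenario matrix: driver type crossed with stopping behaviour, plus expressive cues on the AV. The following is a minimal, speculative Python sketch of how such a matrix might be parameterised; the class, field, and cue names are illustrative assumptions and do not come from the nuReality files themselves, which are distributed as animation assets.

from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class CrossingScenario:
    # One pedestrian-crossing animation scenario; names are illustrative.
    driver: str                 # "human" or "av"
    stops: bool                 # whether the vehicle yields at the intersection
    cues: Tuple[str, ...] = ()  # expressive behaviours on the AV, if any

# The four baseline cases named in the article.
scenarios = [
    CrossingScenario(driver="human", stops=True),
    CrossingScenario(driver="av", stops=True),
    CrossingScenario(driver="human", stops=False),
    CrossingScenario(driver="av", stops=False),
]

# AV variants that signal intent with expressive cues while yielding.
for cues in [("light_bar",),
             ("exaggerated_brake_sound",),
             ("light_bar", "exaggerated_brake_sound")]:
    scenarios.append(CrossingScenario(driver="av", stops=True, cues=cues))

for s in scenarios:
    print(s)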
“We wanted to make this virtual environment as convincing as possible for the participants and included numerous visual (road and building texturing, parked cars, swaying tree limbs) and audible (birds chirping, cars driving by, people talking) elements,” writes Schmitt. “These details enhance place illusion and allow users to sense spatial presence within the virtual environment – giving the impression that they’re standing on an actual street.”
He explained that the VR immersion experience was so convincing that several participants reacted with instinctive anger, swearing and gesturing at vehicles that didn’t stop for them. “Given the effectiveness of the VR immersion experience and the value in studying AV-pedestrian interaction, we want to share it with the research community to further the research and ultimately, lead to AVs that are better able to integrate with their communities.”
The nuReality files can be adapted and used in a variety of applications so that others can expand upon Motional’s work in expressive robotics.