Autonomous vehicles (AVs) trained on extreme, one-in-a-million accident data and near-miss scenarios can achieve a six-fold improvement in detecting the collision risk posed by other road users, compared with vehicles trained using traditional approaches, according to research.
This is the principal finding of D-Risk, a co-innovation project part-funded by the Centre for Connected and Autonomous Vehicles and comprising dRISK.ai, DG Cities, Claytex and Imperial College London. The project drew on a range of data sources, including millions of hours of CCTV and dashcam footage and crowdsourced public stories of near-miss accident scenarios.
Data sources
Research conducted in the UK in February and March 2022 examined millions of hours of CCTV and dashcam footage covering a wide variety of traffic conditions, hundreds of thousands of accident reports, and crowdsourced public stories of near-miss and one-in-a-million accident scenarios. It also used a NASA-inspired failure mode prediction technique designed to reveal rare incidents, or ‘edge cases’, that would be easy for humans to negotiate but hard for AVs.
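The article does not say which failure mode technique the project used. One classic NASA-associated approach is failure mode and effects analysis (FMEA), in which candidate failure modes are scored and ranked by a risk priority number (RPN). The sketch below, written in Python with entirely invented scenarios and scores, is a minimal illustration of that general idea, not the project's actual method.

```python
# Illustrative FMEA-style ranking of hypothetical driving scenarios.
# RPN = severity * occurrence * detection; all names and scores are made up.
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    severity: int    # 1 (negligible) .. 10 (catastrophic)
    occurrence: int  # 1 (very rare)  .. 10 (very frequent)
    detection: int   # 1 (easy for the AV to detect) .. 10 (very hard to detect)

    @property
    def rpn(self) -> int:
        return self.severity * self.occurrence * self.detection

scenarios = [
    FailureMode("poorly marked construction zone", severity=7, occurrence=3, detection=8),
    FailureMode("oddly placed traffic cones", severity=4, occurrence=4, detection=6),
    FailureMode("wild animal in the road", severity=9, occurrence=1, detection=9),
    FailureMode("routine motorway lane change", severity=3, occurrence=9, detection=2),
]

# Rare but severe, hard-to-detect edge cases float to the top of the ranking,
# even though they occur far less often than everyday manoeuvres.
for s in sorted(scenarios, key=lambda s: s.rpn, reverse=True):
    print(f"{s.name:35s} RPN={s.rpn}")
```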
The resulting repository of scenarios was used to identify cases weighted strongly towards the most unusual, high-risk circumstances. D-Risk then used these to retrain the perceptual and control subsystems of AVs to handle risky scenarios with greater accuracy.
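The retraining pipeline itself is not public. As a minimal sketch of the general idea, the Python/PyTorch example below oversamples a small pool of rare, high-risk samples so a toy collision-risk classifier sees them roughly as often as everyday driving frames; the data, model and weights are placeholders, not the project's implementation.

```python
# Minimal sketch: re-weighting training data towards rare, high-risk scenarios.
# All data, sizes and the model are placeholders; this is not dRISK's pipeline.
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

# Toy dataset: 10,000 "everyday" frames plus 100 rare "edge case" frames.
features = torch.randn(10_100, 32)                    # placeholder sensor features
is_edge_case = torch.zeros(10_100, dtype=torch.bool)
is_edge_case[-100:] = True
labels = is_edge_case.long()                          # 1 = collision risk present

# Give each edge case a sampling weight equal to the class imbalance ratio,
# so rare scenarios appear in training batches about as often as common ones.
weights = torch.ones(len(labels))
weights[is_edge_case] = (~is_edge_case).sum() / is_edge_case.sum()
sampler = WeightedRandomSampler(weights, num_samples=len(weights), replacement=True)
loader = DataLoader(TensorDataset(features, labels), batch_size=64, sampler=sampler)

# Stand-in for the perception subsystem being retrained.
model = torch.nn.Sequential(torch.nn.Linear(32, 16), torch.nn.ReLU(), torch.nn.Linear(16, 2))
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()

for x, y in loader:                                   # one epoch over the re-weighted data
    optimiser.zero_grad()
    loss_fn(model(x), y).backward()
    optimiser.step()
```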
“This is the sort of research regulators are asking for and need to see reflected in AV pilots, because it can help inform urban strategy, AV policy, insurance, safety standards and licensing”
Other significant findings include evidence that AVs trained this way are twice as likely to accurately detect a collision risk without compromising performance on other, more frequent types of accident, and can achieve a 20-fold improvement in their ability to contend with highly difficult traffic conditions that would otherwise lead to serious or fatal accidents, again without degrading performance in everyday conditions.
The findings are summarised in a paper titled “Virtual verification of decision making and motion planning functionalities for AVs in the urban edge case scenarios”, which has been submitted to and accepted by the Society of Automotive Engineers (SAE). Policy-makers will be able to use it to draw a stronger link between safety and the types of edge-case accidents that cause an AV to fail when they are not accounted for in its design.
“No deployment has yet been able to demonstrate this kind of accuracy when it comes to road safety,” said Chess Stetson, CEO at dRISK.ai. “To be commercially viable, driverless cars are going to have to deal with one-in-a-million edge cases – the complex, high-risk scenarios, which are individually unlikely but collectively make up the majority of risk. They include everything from poorly marked construction zones, abandoned vehicles, and oddly placed traffic cones to more extreme cases of wild animals in the road.
“This is a ground-breaking piece of research because these are the cases developers in labs don’t plan for, yet are critical for safety training. Collectively, these results point to a new way of developing highly versatile autonomous vehicles, which will be ready to achieve the safety and cost efficiency promise of driverless cars. Fundamentally, this is the sort of research regulators are asking for and need to see reflected in AV pilots, because it can help inform urban strategy, AV policy, insurance, safety standards and licensing.”
Public perception of driverless vehicles
D-Risk also explored the UK public’s perception of AVs and found that there is a large gap between perceived and actual safety that manufacturers, developers and regulators need to address.
D-Risk ran dedicated focus groups and asked people to observe pairs of simulated videos of reconstructed accidents involving a sudden stop, turning right and overtaking a bike. Participants weren’t told whether they were watching a human driver or a driverless vehicle. In all three scenarios, people judged humans to be more dangerous, less predictable, slower and less accurate in their decision making than AVs.
The research also uncovered that only 36.4 per cent of people would be happy to ride in an AV if offered the chance tomorrow. Twenty-nine per cent are undecided but could be persuaded that AVs are trustworthy if given the option to take part in a trial or learn more about the technology.
“The research also highlights that when you actively offer the public opportunities to experience AVs you can truly move perceptions of safety and trustworthiness. But perception isn’t enough. Accuracy is critical”
In parallel, the Imperial College team ran large-scale group virtual reality (VR) experiments that measured participants’ movement around AVs. These provided further ways of identifying high-risk edge cases related to how pedestrians react to AVs, and how those reactions change with conditions such as the weather.
“The research into perception illustrates a critical intersection between AV development and public education and engagement,” said Ed Houghton, head of research and service design at DG Cities. “The only way to ensure that fears and concerns are addressed is to design the technology with them right from the beginning. This is about designing for diversity and reflecting differing perceptions of the definition of ‘safe’.
“The research also highlights that when you actively offer the public opportunities to experience AVs you can truly move perceptions of safety and trustworthiness. But perception isn’t enough. Accuracy is critical. As this project proves, engaging the public on the development of AV training models shouldn’t be underestimated by regulators, manufacturers and developers alike.”
A copy of “Virtual verification of decision making and motion planning functionalities for AVs in the urban edge case scenarios” is available to download here.