Is 9 Casualties Better Than 10? The challenge of evaluating with a lack of data and what we might do about it

June 4, 2025

For practitioners and advocates across our sector, Vision Zero is a moral goal, one which rightly leads us to remove risk and create layers of protection ‘systemically’ within the Safe System. But as we deliver this change together and casualty numbers on our roads (hopefully) come down, evaluators face a growing challenge: less data. How can we tell what works with a scarcity of outcome data?

Nathan Harpham
Principal Consultant

Samuel Scott
Senior Consultant

Table 1 - Barriers and options with a limited ‘data pool’

Barrier: Limited Sample Size

Description: Having a large enough sample size is key to meaningful results. Sample sizes that are too small leave analytical methods underpowered, with results that are not statistically significant (the simulation sketch after this table illustrates the point).

Remedial option: Extend the evaluation period (timeframe).
+ Provides data representative of more medium- to long-term patterns, with no ‘new’ sources needed. This also helps smooth out random variation and account for ‘regression to the mean’.
- Extending data collection periods can incur resource and time costs, as well as longer waits for data and final evaluation reporting.

Remedial option: Extend the scope of an evaluation.
+ Opens the evaluation up to new areas (either new target areas or new evaluation parameters) that may increase the likelihood of gathering additional insight, such as around unintended consequences. This could take the form of expanding the evaluation design (approach type, methods, sampling) and the number of measured or controlled variables.
- Extending the evaluation’s scope may add time for incorporating, measuring, and reporting on the additional insight gathered, especially if new evaluation questions, sources or methods are introduced which fundamentally alter the evaluation’s basic design parameters.

Barrier: Diminishing ‘Rate of Returns’

Description: Intervening with increased efficacy decreases casualty numbers, leaving an ever-smaller ‘pool of data’ that exhibits patterns and trends.

Remedial option: Ground intervention design in best practice and/or link it to what has worked elsewhere, replicating evaluation methods where possible.
+ This provides a methodological basis for evaluation where data are limited but effective measures and published results already exist. Utilising an established logic model can demonstrate value across all phases of intervention delivery.
- Leaning too heavily on best-practice or guidance documentation without sufficient data to measure change can create more of a theory-based premise, although even for complex evaluations this can be the right approach when the interactions at play are hard to disentangle.

Remedial option: Collaborate with partners to expand the ‘target area’ of an evaluation.
+ Grants potential access to more outreach channels and hence a greater data pool (target-population survey responses, numerical count data, etc.). Collaborating or commissioning support can help guarantee quality outputs and a strong independent voice, leading to trustworthy results.
- Expanding the target area via a partner means responsibility for the evaluation’s delivery is likely to be shared; if not managed sufficiently, this could lead to duplication, inconsistent data collection, and even compromised reporting.

Barrier: Reliance on traditional data and analysis

Description: Relying on traditional sources that measure casualty numbers, such as STATS19. With smaller numbers of casualties there is a need to look beyond these sources and take a more proactive approach (in line with the Safe System), looking at indicators and risk rather than final outcomes.

Remedial option: Supplement traditional data with ‘proxy’ or ‘surrogate’ indicators that are inherently linked to the amount of harm on the roads.
+ This can massively increase the sample size; for example, ‘near miss’ events are much more frequent than collisions (the precision sketch after this table shows why that matters). Provided the evidenced link between an indicator or ‘proxy measure’ and the final outcome is strong, these can provide good evidence of the desired change, e.g. a reduction in casualty numbers.
- Exploring appropriate proxy measures and indicators is time-consuming when done in isolation and can detract from the known value of using validated data. It can also be difficult to establish the evidence base behind the indicators and why they are useful.

Remedial option: Integrate novel datasets and pursue innovative methods.
+ Relying on traditional data and analytical methods when evaluating is a sector-level risk: if new types of data and methods are not tried, tested and verified for use, how can the sector become more confident in their use? As a sector we need to establish more innovative approaches that utilise the rich data environment we are in.
- The high level of technical skill and expertise needed to use novel datasets and methods is a persistent barrier. Investing the time and resources to do this adequately is often not feasible for stakeholders by themselves; agreements to acquire these skills (or personnel) are often more feasible alongside partner organisations.
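To make the sample-size barrier concrete, here is a minimal simulation of how often a simple before/after comparison of casualty counts at a single treated site reaches statistical significance. All figures are hypothetical assumptions for illustration (Poisson-distributed annual counts, a baseline of 10 casualties per year, a true 20% reduction); they are not drawn from any particular evaluation.

```python
import numpy as np
from scipy.stats import binomtest

# Illustrative power simulation (hypothetical figures, not real data):
# annual casualties at a treated site are Poisson-distributed, and the
# intervention delivers a true 20% reduction in the underlying rate.
rng = np.random.default_rng(42)

def detection_rate(annual_rate=10, years=3, reduction=0.20,
                   n_sims=10_000, alpha=0.05):
    """Share of simulated before/after evaluations that detect the
    reduction, using the conditional binomial test for comparing two
    Poisson counts observed over equal periods."""
    detected = 0
    for _ in range(n_sims):
        before = rng.poisson(annual_rate * years)
        after = rng.poisson(annual_rate * (1 - reduction) * years)
        total = before + after
        if total == 0:
            continue  # nothing observed, nothing to test
        # Under H0 (no change), 'after' ~ Binomial(total, 0.5);
        # a one-sided test asks whether 'after' is surprisingly low.
        if binomtest(after, total, 0.5, alternative="less").pvalue < alpha:
            detected += 1
    return detected / n_sims

for years in (1, 3, 5, 10):
    print(f"{years:>2} year(s) each side: power = {detection_rate(years=years):.0%}")
```

In this set-up a genuine 20% reduction at a single site is detected less than half the time even with ten years of data either side of the intervention, which is exactly why extending periods, widening scope, or pooling sites matters.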

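The appeal of proxy or surrogate indicators is, at root, arithmetic: the relative uncertainty of a Poisson count shrinks with the square root of the number of events, so a far more frequent indicator yields a far more precise estimate over the same period. In the sketch below the collision frequency and the near-miss-to-collision ratio are hypothetical figures chosen purely for illustration; any real use would need an evidenced link between the surrogate and harm, as the table notes.

```python
import math

def relative_ci_halfwidth(count: int, z: float = 1.96) -> float:
    """Approximate half-width of a 95% CI for a Poisson rate,
    as a fraction of the estimate (normal approximation)."""
    return z / math.sqrt(count)

collisions_per_year = 5   # hypothetical figure for a treated site
near_miss_ratio = 40      # hypothetical: near misses per collision
years = 3

collisions = collisions_per_year * years
near_misses = collisions * near_miss_ratio

print(f"Collisions  ({collisions:>3} events): rate known to ±{relative_ci_halfwidth(collisions):.0%}")
print(f"Near misses ({near_misses:>3} events): rate known to ±{relative_ci_halfwidth(near_misses):.0%}")
# With k times more events, relative uncertainty shrinks by sqrt(k):
# here sqrt(40) ≈ 6.3x tighter, provided the surrogate-harm link holds.
```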