How body camera footage can enhance officer training
New research involving body camera footage explores situational and dynamic factors associated with decision-making and efficacy of police training
Few forces are impacting law enforcement like video. Policing in the Video Age, P1's yearlong special editorial focus on video in law enforcement, aims to address all facets of the topic with expanded analysis and reporting.
In the third installment of this four-part signature coverage effort, we address training and policy in a recorded world.
In spite of the potential benefits of using body-worn camera footage to improve community interaction, increase officer safety and evaluate training, police departments are only minimally using the information available at their fingertips. The crux of the problem comes down to time: It is impossible for agencies to dedicate the manpower required to review hundreds of thousands of hours of footage generated by body-worn cameras.
Criminal justice experts at Washington State University (WSU) are hoping to solve this problem by using advanced scientific tools and techniques – such as data analytics, biometrics and machine learning – to examine the complex factors that shape interactions between police and community members.
Researchers in the new Complex Social Interaction (CSI) laboratory at WSU are designing algorithms and software that analyze body-worn camera footage. Led by Dr. David Makin, the research team includes Dr. Dale Willits in the Department of Criminal Justice and Criminology, Dr. Rachel Bailey in the Murrow College of Communication, and Dr. Bryce Dietrich of the University of Iowa's Iowa Informatics Initiative (UI3).
Since its launch early this year, the lab has analyzed more than 2,000 police–community interactions and numerous records from law enforcement incidents to identify, code and catalog key variables associated with a range of outcomes, positive to negative. Location, lighting, time of day, number of people present, gender, race, verbal and physical stress, and intensity of the interaction are among the contextual factors assessed.
PoliceOne recently spoke with lead researcher David A. Makin, PhD, an assistant professor in the Department of Criminal Justice and Criminology at WSU and a member of the research faculty of the Washington State Institute for Criminal Justice, about how his team is turning body-worn camera footage into actionable data.
How did you become involved with this research?
As a technologist, I adamantly believe we can reduce risk, enhance officer performance, and improve officer health and safety by converting body-worn camera footage into actionable data.
I have spent several years working on how to analyze this footage and convert it into data. Currently, we have instruments and algorithms in the lab for the study of use of force, intensity/emotional states and behavior, and detection of key performance indicators for risk management and evaluating training effectiveness.
In policing, you can do everything by the book and still have a poor outcome. You can do everything badly and somehow have a good outcome. If we can study or analyze what takes place in interactions and then develop training around it, we can use that as part of risk management.
We forget that police interactions can be inherently complex. Many departments wait for something bad to happen and then they go back and hope they have footage of the incident to figure out what transpired. The CSI lab facilitates the study of policing beyond merely the outcome to understanding what took place in an interaction. We are trying to give agencies the technology they need so they can use this footage to identify if officer training is working, when training needs to occur and when officers need refresher training, all while improving the overall state of police community interaction and officer health and safety.
What was the genesis behind the CSI lab?
The CSI lab came about as an interdisciplinary, intercollegiate commitment by all the core researchers to help agencies make use of body-worn camera data.
We have been using machine learning and artificial intelligence to help agencies make the leap from having footage to having data.
For example, we are developing a classifier that allows agencies to scan footage for specific types of incidents. Say we have an intensity classifier: if you wanted to know which incidents are really intense, you could run a scan on thousands of hours of footage and pull up the incidents that are outliers among hours of benign footage. Then when you view those calls, you can see problems developing early in the response. This allows agencies to ask: Why is this becoming more intense? What could we do to reduce the intensity? Does verbal de-escalation training work? Does CIT work? Does having police officers go through implicit bias training change their behavior in the field?
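The outlier scan Dr. Makin describes can be sketched in miniature. This is illustrative only and not the CSI lab's actual classifier: it assumes each video has already been assigned a numeric intensity score by some upstream model, and simply flags the statistical outliers among mostly benign footage.

```python
# Hypothetical sketch: flag intensity outliers among scored videos.
# The "intensity" scores are assumed to come from an upstream classifier.
from statistics import mean, stdev

def flag_intense_videos(scores, z_threshold=1.5):
    """Return IDs of videos whose intensity score is an outlier."""
    mu = mean(scores.values())
    sigma = stdev(scores.values())
    return [vid for vid, s in scores.items()
            if sigma > 0 and (s - mu) / sigma > z_threshold]

# Mostly benign footage, with one highly intense incident:
scores = {"v1": 0.10, "v2": 0.12, "v3": 0.95, "v4": 0.11, "v5": 0.09}
print(flag_intense_videos(scores))  # ['v3']
```

The point of the design is triage: instead of reviewing every hour of footage, a reviewer looks only at what the scan surfaces.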
From an officer’s perspective, we know that early intervention is key, but if you use your body-worn camera footage as a risk management tool only after something bad happens, then you miss all the steps that lead to “bad outcomes” that could be used as predictors ahead of time.
How did you make the decision to analyze use-of-force incidents?
We started with use-of-force projects as most of the research out there merely looks at whether use of force occurred and does not consider how it occurred.
If you think about force and the sequence of events, you can measure the time of events: How quickly use of force occurred during the interaction, how long the force was applied and the severity of the force applied. This changes how we think about force.
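The timing measures described above can be expressed as a small data structure. This is a hedged sketch with invented field names, not the lab's coding instrument; it assumes event timestamps (in seconds from the start of the interaction) are available from coded footage.

```python
# Illustrative sketch: summarize *how* a use-of-force event occurred,
# not just whether it occurred. Field names are hypothetical.

def force_timing(interaction_start, force_start, force_end, severity):
    """Summarize a use-of-force event by onset, duration and severity."""
    return {
        "seconds_to_force": force_start - interaction_start,  # how quickly force occurred
        "force_duration": force_end - force_start,            # how long it was applied
        "severity": severity,                                 # coded severity level
    }

# Force deployed 45 seconds in, applied for 7 seconds, at severity level 2:
summary = force_timing(interaction_start=0, force_start=45, force_end=52, severity=2)
print(summary)
```

Aggregating these summaries across incidents is what makes comparisons like "vastly quicker to deploy force with certain groups" measurable at all.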
You could look at an agency’s data and see officers apparently using force disproportionately against a particular group, but when you analyze how the force happened, you may find all of those instances were legitimate uses of force. That is an entirely new story for an agency to be able to tell: We are doing everything in a legitimate, open and transparent way.
The other side of this is you can look at an agency’s data where it appears they are not using force disproportionately, but when you analyze their footage at this microsocial level, you identify that they are vastly quicker to deploy force with certain groups.
You can also show an officer is using force in a way counter to how they have been trained. If we can intervene in that and say to a department, “This officer is incorrectly applying an LVNR (lateral vascular neck restraint) and is using a dangerous technique,” it offers opportunities for training interventions.
We are currently finalizing a series of studies associated with use of force. We published the first early this year in the Journal of Research in Crime and Delinquency. The second explores intensity states and use of force. The third examines situational, environmental and behavioral factors associated with police and suspect emotional states. The final study examines severity and timing of use of force.
What has surprised you most about your findings?
Using our intensive coding process, we analyzed over 5,000 hours of police interactions and are starting to ascertain that some officers have qualities we want to learn more about.
In some situations, based on our data, everything would suggest that use of force would occur, but we have been able to identify officers who are effective de-escalators with remarkable interpersonal skills. If we have years’ worth of their footage analyzed, we can see whether these skills were developed through training programs, whether they come from experience or whether these officers are just innately effective verbal communicators.
We code over 100 variables, many of them time-based. We look at how long the officer spoke, how long the suspect spoke, any community conversation and how often there were interruptions. We can look at the totality of the interaction, which opens up entirely new approaches for a risk management program. This is no different from what we have done for decades with an FTO program, where field training officers supervise new recruits.
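Talk-time and interruption coding of the kind described can be sketched as follows. This is an assumption-laden illustration, not the lab's instrument: it presumes a diarized transcript, i.e., a list of (speaker, start, end) segments in seconds, and counts an interruption whenever a segment starts before the previous one has ended.

```python
# Hypothetical sketch: talk-time per speaker and interruption count
# from a diarized transcript of (speaker, start_sec, end_sec) segments.

def talk_time(segments):
    """Total speaking seconds per speaker, plus overlap-based interruptions."""
    totals = {}
    interruptions = 0
    for i, (speaker, start, end) in enumerate(segments):
        totals[speaker] = totals.get(speaker, 0) + (end - start)
        if i > 0 and start < segments[i - 1][2]:  # began before prior segment ended
            interruptions += 1
    return totals, interruptions

# The suspect cuts in at second 8, before the officer finishes at second 10:
segments = [("officer", 0, 10), ("suspect", 8, 15), ("officer", 15, 30)]
totals, cuts = talk_time(segments)
print(totals, cuts)
```

Even these two simple measures already support questions like who dominated the conversation and how often the officer was cut off, which is the kind of variable the lab's far richer coding scheme captures.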
You will be generating your own research footage via cadets enrolled in WSU's Police Corps program this fall. Can you tell us about that initiative?
Policing is about protocols. We train officers and cadets to do very specific things every single time. When we train, we have key performance indicators for any given interaction. For example, there are five things that must occur on every high-risk traffic stop or three things you must do when you conduct a search and seizure. These things vary based on state law and how agencies interpret it, but all agencies have these KPIs.
We are developing KPIs associated with police training that we will field this fall in a police cadet training program. KPIs are dichotomous: either the officer did this or they didn’t. Working with the cadet training officers, we can develop a series of classifiers that look for specific KPIs to show training progress.
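Because each KPI is a yes/no check, compliance scoring reduces to a checklist. The KPI names below are invented for illustration; as Dr. Makin notes, real KPIs vary by state law and agency policy.

```python
# Hedged sketch: dichotomous KPI scoring for a high-risk traffic stop.
# KPI names are hypothetical, not an actual agency checklist.

HIGH_RISK_STOP_KPIS = [
    "called_in_location",
    "requested_backup",
    "positioned_vehicle",
    "issued_clear_commands",
    "maintained_cover",
]

def kpi_compliance(observed):
    """Fraction of KPIs met; each is either done or not done."""
    met = sum(1 for k in HIGH_RISK_STOP_KPIS if observed.get(k, False))
    return met / len(HIGH_RISK_STOP_KPIS)

stop = {"called_in_location": True, "requested_backup": True,
        "positioned_vehicle": False, "issued_clear_commands": True,
        "maintained_cover": True}
print(f"{kpi_compliance(stop):.0%} of KPIs met")  # 80% of KPIs met
```

Averaging this score over every stop an agency records is what would produce the agency-level baseline ("95% of the time officers are meeting the KPIs") discussed below.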
Axon provided me with access to 60 body-worn cameras and storage to outfit the cadet program and generate footage. We use machine learning and artificial intelligence to teach computers how to look for things in video. One of the problems you run into when you develop a classifier is you need a massive amount of data as you need iteration upon iteration of different objects and viewpoints.
Use of force in the field is rare, so when working with agencies, trying to develop a classifier that automatically detects it in body-worn camera footage can be challenging: you do not have enough data to teach the computer.
In the cadet program, cadets go through use-of-force training and run mock exercises and drills, which lets us generate footage from multiple perspectives with different camera settings, viewpoints and mounting options. This allows us to speed up the development of tools agencies can use to automate analysis of the footage.
What we hope to do is deploy these KPIs at the agency level. From a risk management perspective, you would be able to automate the analysis of, for example, high-risk traffic stops, and hopefully, 95% of the time officers are meeting the KPIs for these stops. The cadet program will allow us to automate analysis of body-worn camera footage so it becomes actionable data from a risk management perspective.
When I ask police chiefs if they would want to know that 40% of the time during high-risk traffic stops their officers are making mistakes that put themselves and others at risk, they all nod their heads. Our research is about creating a baseline so we know how an agency is performing by its own benchmarks. If we know certain interactions can escalate into situations where use of force is required, we can develop training that enables police officers to handle these calls. Right now if you look at the research in policing, we do not know if this is the case. We assume all our training works, but if it doesn’t change behavior in the field, then why are we doing it? If it is “check-the-box” training, we are wasting money and time and police officers will view training as just perfunctory. At the end of the day, training has to impart new skills and, with this research, we can validate if training is working.
How do you see this research being practically implemented?
It is one thing to have video footage; it is something entirely different to help agencies automate the analysis. Even something as basic as answering the question, “Which videos should I look at?”, would be extremely valuable to a patrol sergeant. Being able to automate the analysis so that at the end of the week a sergeant receives a notification that there are 20 videos to review instead of 400 is how we will move policing into the 21st century.
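The weekly triage Dr. Makin envisions can be sketched as a ranking step. This is illustrative only: it assumes each video already carries a "flag score" from upstream classifiers (intensity, KPI misses and so on), and simply returns the top of the queue.

```python
# Illustrative sketch: reduce a week of footage to a short review queue.
# flag_score is assumed to come from upstream classifiers; the data is fake.

def review_queue(videos, top_n=20):
    """Return the top_n video IDs ranked by flag score, highest first."""
    ranked = sorted(videos, key=lambda v: v["flag_score"], reverse=True)
    return [v["id"] for v in ranked[:top_n]]

# 400 videos from the week, most of them unremarkable:
week = [{"id": f"vid{i}", "flag_score": i % 7} for i in range(400)]
queue = review_queue(week)
print(len(queue))  # 20 videos for the sergeant instead of 400
```

The design choice here is deliberate: the system does not replace the sergeant's judgment, it just decides which footage is worth that judgment.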
For more information, contact Dr. Makin at firstname.lastname@example.org.