This is the second in a series of critical articles that frame the systemic issues in the firearms and tactical training industry as resulting from two root-level failures. We believe that, no matter how much the rest of the industry advances with respect to equipment, tactics, techniques, etc., we are always going to face significant challenges developing consistently acceptable levels of performance, especially for institutional training applications, until we tackle these two fundamental issues head-on.
In the first article we talked about the first failure—how we deliver training. We hope you check out the article if you haven’t read it. The Twitter summary is that we use systems and structures for training delivery that don’t match how the human brain receives information. This is our flagship issue here at BUILDING SHOOTERS and is the subject of our first book, which is about applying modern brain-science research to improve our training structures and the resulting outcomes.
In this second article, we’re going to briefly look at the second of our fundamental industry failures: how we measure success.
Failure 2: How we measure
In a previous article, we wrote critically, and at some length, about the use and application of standards in the industry. We won’t repeat that article here; however, it’s important to understand that neither of the two common types of shooting standards provides much benefit with respect to predicting successful operational performance.
Keeping things at a high level, the industry’s failure with respect to measuring occurs for two reasons: the standards don’t measure anything of value, and the way they are applied tends to negatively influence outcomes in the real world.
The qualification courses used by most law enforcement, military, and private security or regulatory-based functions (such as concealed carry programs) are, frankly, worthless. The fact is that they don’t involve using any of the same skills that are required on the street.
To a layperson, the skills involved may appear to be the same, but once you dive into the technical shooting and neurological aspects of what’s really happening, most people performing these types of qualifications are using completely different skills (and performing them from completely different parts of the brain) from those that could ever be applied in a real-world shooting.
It’s like testing a teenager on the use of a riding lawn mower in the backyard—as his or her sole preparation for driving an SUV into New York City during rush hour. Nobody should be surprised if the results are something short of spectacular.
In fact, I’ll go further. If results on the street are even adequate, it’s almost certainly based on something other than the “training” and testing provided by the institution.
The types of qualification standards used by shooting schools and special operations units are more focused and usually involve specific pass/fail events based on individual skills or skill sequences. Each individual skill or sequence is, in effect, its own test.
For measuring raw shooting skill, these standards fare much better than the more traditional qualification model. They allow both student and instructor to focus on specific things that are (or at least may be) important in the real world, such as presentation from various positions, recoil management, trigger management, accuracy requirements across varying target sizes and distances, speed, rhythm, etc.
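To make the pass/fail structure concrete, here is a minimal sketch of how discrete standards of this type might be encoded. The drill names, par times, and accuracy requirements are our own illustrative assumptions, not any school’s published standard.

```python
# Hypothetical pass/fail drill specifications; every number below is an
# illustrative assumption, not a published standard.
DRILL_STANDARDS = [
    {"drill": "draw_to_first_shot", "distance_yd": 7, "par_time_s": 1.5, "hits_required": 1},
    {"drill": "reload_and_fire",    "distance_yd": 7, "par_time_s": 3.0, "hits_required": 2},
]

def passed(spec: dict, time_s: float, hits: int) -> bool:
    """Each drill is its own test: meet the par time AND the accuracy bar."""
    return time_s <= spec["par_time_s"] and hits >= spec["hits_required"]

# Example: a 1.4-second draw with one hit passes the first standard.
print(passed(DRILL_STANDARDS[0], time_s=1.4, hits=1))  # True
```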
Depending on a variety of factors (such as the number of repetitions performed over time) it’s possible, although not guaranteed, that these skills can be put into (during training) and/or performed from (during testing) the same brain areas from which shooting skills will be accessed during an actual, real-world engagement.
Note: If you’re not familiar with the different memory systems there’s a brief summary contained in this article. Our newest book Mentoring Shooters contains a detailed explanation along with how to practically apply the information to train others in mentoring environments. If you’re interested in the science, our first book Building Shooters contains the research along with a model for applying it to training system design.
These standards are more like driving a car on a track doing cone drills—in preparation for going off into New York traffic.
Sure, it’s better, but is it really what we want to rely on?
Both types of measurement models fail to even attempt to evaluate most of what is relevant on the street—where decision-making is king and fluid responses to dynamic stimuli as a situation unfolds are just as important as (if not more important than) fundamental skill. (Please note that the skills are important.)
Both can also negatively impact operational outcomes in the real world, though for different reasons. Because the stereotypical qualification model doesn’t normally require the development or use of operationally relevant skills, its harm is relatively straightforward to understand: the skills necessary to perform in the real world simply aren’t ever developed at all.
There’s another harm as well: the creation of a false sense of competency. This can result in instructors teaching, or students simply learning, skills and techniques that are, in fact, wrong, at least for gunfighting applications.
There is more than one way to skin a cat in most cases; however, there’s also some stuff that just doesn’t work when the rubber meets the road. Yet sometimes these things are still learned, or even taught, on the justification of meeting an operationally meaningless standard.
This is true in law enforcement and in the military, but nowhere is it more poignant than in the realm of concealed carry. The “standards” in these qualification courses are so low that they aren’t really worth considering shooting standards at all. They have no relevance whatsoever. Yet we still commonly hear people who have recently finished a concealed carry course talk about their “score” on the test and how they found “what worked for them,” and equate that to their readiness for self-defense. It’s cringe-inducing.
With discrete shooting standards, the potential harm to the student’s skillset is less intuitively obvious, yet it still exists. As would be expected, people tested in this manner will usually fare much better on the street than those who never develop any functional skills at all (or who develop “skills” that are functionally worthless, justified by receiving what is, in effect, a participation trophy for completing a meaningless bullet-launching exercise; /rant). However, this doesn’t make the potential for relative harm (when compared to the skillset that should have been produced by the level of effort invested) any less real.
Ironically, sometimes the more difficult these types of discrete standards are, the more harm is done with respect to the overall functional skillset and eventual operational performance of the trainee.
If difficult, discrete shooting standards are but a small component of a much larger, holistic training package (such as might happen in a special operations unit), the operational impact of this harm is significantly mitigated. Enough resources and training time are expended to compensate for and overcome it.
However, without these types of resources being put into a well-rounded training program that eventually develops a complete, functional skillset, the use of difficult, discrete standards can actually reduce the potential for positive operational outcomes.
No, you didn’t misread that.
More difficult standards can actually reduce operational performance when compared to what should have been achieved in the same training program—or to what could have been achieved with less challenging tests within the same course of instruction. (Note that we differentiate here between training exercises/drills and required standards testing/measurement.) In fact, in cases where the average student struggles to meet the program’s testing standards, this outcome of reduced operational performance is a virtual certainty.
There are several reasons for this, one of which is a phenomenon that noted human performance researcher Joan Vickers calls the paradox of motor learning research. Repeated research studies have demonstrated that higher end-of-training-period performance on discrete standards is not predictive of better operational performance. In fact, students who score lower at the end of training will predictably perform better operationally when trained using better (interleaved) methodology. If you haven’t read Vickers’s book, The Quiet Eye in Action, you’re missing out. It is a must-read for any training developer in this field.
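For readers unfamiliar with the terminology, here is a minimal sketch contrasting the two practice schedules. The drill names and repetition counts are hypothetical, chosen only to show the structural difference.

```python
import random

# Hypothetical drills and repetition counts, for illustration only.
drills = ["draw", "reload", "target_transition"]
reps = 5

# Blocked schedule: finish every repetition of one drill before starting
# the next. This tends to maximize end-of-session test scores.
blocked = [d for d in drills for _ in range(reps)]

# Interleaved schedule: the same total repetitions, rotated so that
# consecutive trials exercise different skills. Per the motor learning
# research Vickers summarizes, this tends to transfer and retain better.
interleaved = drills * reps

# A randomized variant adds further contextual interference.
randomized = interleaved[:]
random.shuffle(randomized)

print("blocked:    ", blocked)
print("interleaved:", interleaved)
```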
I can already hear the question, “Are you saying we should lower our standards?”
No.
We’re saying that you should restructure your entire concept of what standards are.
When trying to address a problem, the right first step is always to take the time to thoroughly understand the problem itself. When we’re talking about gunfighting, the problem isn’t defined by a person’s ability (or lack thereof) to present the weapon and shoot quickly and accurately. That’s a component of a potential solution, and it’s important. But it’s not the problem.
The real problem is a whole lot more complex and involves a whole lot of information processing. This processing occurs in many more brain areas than just those engaged in the mechanics of shooting; examples include the areas responsible for stimulus processing, contextual association, and decision-making.
For high levels of performance to occur in an operational environment, high levels of discrete-skill performance ability aren’t enough and may not even be truly necessary.
While the capability to perform shooting skills at a high level certainly isn’t harmful by itself (we encourage development of a high-level skillset, lest there be any confusion), some of the training methods that are traditionally employed to develop those levels of discrete skill can, in fact, negatively impact operational performance potential.
In 2001, two neuroscientists (Mirman and Spivey) published an article in the journal Connection Science titled “Retroactive Interference in Neural Networks and in Humans: The Effect of Pattern-Based Learning.” One of the things their research demonstrated was that localization of a neural network removes the generalization properties of the network.
In layman’s terms, this means that repetitively practicing discrete skills in a sterile environment, while excellent for producing reliably high levels of performance in similarly stimulus-free settings, can actually reduce the brain’s ability to integrate those skills with other neurological functions. Examples (again) include stimulus recognition and processing, contextual association, and decision-making.
Reframing that information: excessive use of these blocked training methods in sterile environments interferes with the brain’s capability to Observe, Orient, and Decide (the first three steps of the OODA loop) with respect to the application of shooting skills.
Think about that one for a minute.
We suggest that this isn’t what we want to do, either as trainers or as students. It’s certainly not a pathway to success if we want to effect the best possible outcomes out on the street.
Please note here that we do encourage the development of high levels of fundamental skill in training programs, and that we are not condemning blocked training methods. We are pointing out their limitations. Effective training development is not about a single technique or method; it’s about the right information and training techniques at the right point in a student’s development.
This is getting a little long, but there are two questions that we need to address in closing: “Why is this important?” And, “What can we do to change it?”
With respect to the first question, part of the answer is intuitively obvious (we can stop causing harm that limits operational performance), and part of it is less so.
The fact is that our systemic failures in measurement have a symbiotic relationship with our first failure: training delivery. Standards are there to be achieved, and exceeded. When they must be met for a job (or as a matter of pride), the training structure and delivery will eventually shift into training for improvement on the test, no matter what the intent of the related training program.
People, especially people who sign up to engage in lethal combat as a profession, are competitive; therefore, the very existence of standards and qualifications tends to drive them to want to be “better.” This is a great thing. But what happens when “better” actually isn’t?
This leads us to the second question, “What can we do about it?”
Measurement is a newer area of study for us when it comes to the science. Therefore, we expect (and, in fact, intend) that the following concept will be the beginning of an idea, not the end. With that caveat, we propose a complete, fundamental restructuring of the concepts behind standards and measurement in the firearms and tactical training industry.
What we envision is an entirely different approach: rather than basing qualifications on measuring students’ performance against a consistent set of exercises or shooting yardlines, the primary standard instead becomes documentation of the applied process of brain-based learning activities.
To help us collectively wrap our heads around this concept, let’s use a manufacturing analogy. As it currently stands, our fledgling idea has three parts.
The first part is the process itself.
When producing a product on an assembly line, each item is normally not taken off the line and tested to the limit at each possible point of failure to ensure that it works. Nor is each item simply assumed to be acceptable based on a meaningless and arbitrary test, one unrelated to the item’s intended performance.
Imagine a 5-ton chain-hoist factory that tests each item straight off the line by clipping a ten-pound weight to one link of the chain and seeing if it breaks the chain or not.
“Yep, here’s another one—good to go. Ship ‘er out!”
The idea is absurd. Yet it’s exactly what we do in our predominant method of qualification in the firearms industry.
We propose that the essence of qualification instead be based on the same types of things that make for a quality manufacturing operation—namely good engineering combined with quality control for the manufacturing process itself.
The single biggest revelation from the research for and writing of the book Building Shooters is that we now actually have the tools to do just that. Using the modeling tool and training development process outlined in the book, we can literally engineer an operational skillset to produce specific results, including predicting and tracking its neurological development throughout the duration of a training program.
Therefore, measurement of the training process—for the construction and long-term maintenance of the operationally desired “machine”—becomes the most significant component of qualification.
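As a sketch of what documenting and auditing the process might look like in practice, consider a record structure along these lines. All field names and categories are our own assumptions for illustration; the book’s actual modeling tool is not reproduced here.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DrillRecord:
    """One documented training event for one student."""
    student_id: str
    drill: str          # e.g., "draw_from_holster"
    trained_on: date
    reps: int
    context: str        # e.g., "sterile_range" or "decision_scenario"
    schedule: str       # "blocked" or "interleaved"

@dataclass
class TrainingProcessLog:
    """Process documentation that could serve as the primary standard."""
    records: list = field(default_factory=list)

    def add(self, record: DrillRecord) -> None:
        self.records.append(record)

    def reps_by_context(self, drill: str) -> dict:
        """Audit question: has this skill been exercised across contexts,
        or only in sterile, blocked repetition?"""
        counts = {}
        for r in self.records:
            if r.drill == drill:
                counts[r.context] = counts.get(r.context, 0) + r.reps
        return counts
```

A qualification authority could then verify that each operationally required skill was built and maintained across contexts and schedules, rather than testing the student at a yardline and calling it done.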
The second part is stress testing: measurement of manufactured components near the point of failure, in operationally realistic testing. Hanging 500-lb weights from dental floss to see if it breaks doesn’t make any sense. Test the item for what it was designed to do.
We must be careful here. On a manufacturing line, you can take a product, break it in testing, and toss it in the trash can (or strip it down for parts). You cannot do this with a student, and it certainly should not be done on purpose.
Breaking a student beyond repair is absolutely possible in training, especially when using experiential methods. Please set aside your self-righteous “protector of the chalice” ego here. Every human being can be broken. This includes you and me.
Do people need to be “weeded out” of training programs for the armed professions? Unquestionably the answer is yes.
Is hardship-based attrition one of several methods that should be employed to do this? Some might argue against it, but we, at least at this point in time, say yes. In our opinion this is both appropriate and critically important.
That said, do we need to destroy people’s skillsets, and perhaps more, while we stress-test them? That answer, at least in our opinion, is most definitely no. Setting aside the morality discussion, what happens to the skillsets that aren’t broken by the process?
Hyperbole and tough-guy memes on social media aside, does a measurement tool that breaks some significant percentage of people tested benefit the skillset of the people it doesn’t break? We suggest that you should at least consider the possibility that the skillsets of those who don’t “break” are still made weaker and more fragile, not stronger and more resilient, by measurement methods that are fundamentally designed to break people individually.
Further, consider the idea that hardship-based attrition is different from stress-testing the operational skillset to the point of failure. The first tests the mettle and desire of the individual, and thus can be done without compromising either the developed skillset or the psychological health of the student. The second tests the manufacturing process and the associated QA/QC methods, NOT the skillset itself.
Reread that and think about it for a minute.
When a student within an institutional training program stress-tests to failure, the failure being measured is not the failure of the student as an individual; rather, it is the failure of the manufacturing (training) process. Even if the problem does lie with the individual (and sometimes it does), the fact that an individual with an operational incompatibility has made it into the stress test indicates a quality control issue somewhere else in the process.
Testing to find the points of failure in any system is important. No process is going to be perfect, and we have to find out what’s wrong with it before we can fix it. However, we don’t have to blame and denigrate the student, potentially causing permanent damage, nor do we have to structure the measurements in such a way that they are intended to break the student’s skillset.
Measure the failure points of the training system—not the failure points of the students’ individual skillsets.
The third part of the qualification concept is measurement of the individual product (student).
It’s worth considering manufacturing standards and testing procedures here. If a safety-critical item is rated to perform at (and tested to) 5,000 lbs, its actual failure point is probably engineered closer to 15,000 lbs.
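In engineering terms, that margin is a design safety factor. As a quick worked example using the illustrative numbers above:

\[
\text{safety factor} \;=\; \frac{\text{failure load}}{\text{rated load}} \;=\; \frac{15{,}000\ \text{lb}}{5{,}000\ \text{lb}} \;=\; 3
\]

The routine test exercises the item well inside its designed capability; finding the true failure point is the job of the engineering and QA process, not of every unit that ships.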
We suggest a similar approach to testing the individual. Test the entire machine at the baseline operational level, not higher. In this way, you avoid the temptation to “train for the test” and all the harm that comes with it. The test is easy, at least within the context of the skillset developed by the training program. You don’t need to train for it. Instead, you spend your time training for performance on the street.
This doesn’t mean you don’t push students in training. When making components out of steel, the steel is repeatedly heated and battered, forged into the desired end state. That’s how it’s made. That’s NOT how it’s measured.
This completes our pair of articles on the two systemic failures in the firearms and tactics training industry: delivery and measurement.
In conclusion, addressing these two failures systemically requires us to do two things. First, we need to use the “manufacturing process” (delivery) that will produce the results we want, by design. Then, when measuring, we need to focus on quality control for the delivery process.
We shouldn’t compromise or break the product that we have worked so hard to build, and we shouldn’t interfere with the delivery by substituting our standards of measurement for the operational requirements of the product (student). Doing so is not only a waste of time and resources; it’s also damaging to the lives of ALL who are affected by the suboptimal operational outcomes produced by our industry’s two systemic failures.