By Evan Sinar, Ph.D.
In recent years, the humble egg, once viewed as a shell-encased dietary time bomb, has been increasingly lauded as an ideal high-protein food (so long as you stick to the egg whites). The changing perception of eggs' risks and benefits mirrors a similar shift in perceptions of cognitive testing.
Cognitive testing long ago fell out of favor due to concerns about legal defensibility and test security. But as jobs grow in complexity, the need to understand a candidate's intelligence and problem-solving ability in order to project his or her job effectiveness has remained, and in many cases even increased. Today, employees have more opportunities than ever to apply cognitive skills on the job in pursuit of organizational goals. In fact, the move away from cognitive testing has coincided with a confluence of offsetting trends that have made cognitive abilities, and the job competencies they connect to, increasingly valuable.
Just as with the bad press eggs once received, the naysaying about cognitive testing ultimately proved overblown. There was no need to avoid eggs; people just needed to be smarter about how they consumed them. The same goes for cognitive testing: organizations can use it; they just need to be smart about it. Fortunately, advances in testing methodology and technology have allayed most of the long-held concerns.
So what changed? Plenty, as it turns out. “Adaptive” cognitive testing instruments are now available that remedy many of the old concerns and offer new advantages.
Here’s how these new instruments address the most common concerns:
They safeguard against cheating. With some of the new adaptive instruments, the test is structured so that each person taking it may see a different set of items, presented in a unique sequence. There are some forms of cheating you simply can’t avoid without having the candidate come in and take the test in person (for example, you can’t prevent someone from having a smart friend sit next to the candidate and tell him or her what the right answers are). In terms of large-scale cheating, however, such as where an answer key is created and widely shared, the new adaptive instruments effectively make that form of cheating extremely difficult, if not impossible.
They are more engaging and perceived as more credible. A breakthrough feature of some of the newly available adaptive cognitive tests prevents candidates from disengaging from the testing process because they view the test as either too easy or too difficult. With these instruments, whether or not a candidate answers an item correctly determines the difficulty of the next item presented. In other words, as far as difficulty goes, the questions will always be "just right." When paired with test content that avoids perceptions of being overly specific and unrelated to the job for which candidates are applying, this item targeting leads candidates to feel as if each test item is truly reflective of their ability.
They are just long enough to be accurate. One of the foundational ideas behind adaptive cognitive tests is that every item presented is the one best calibrated to the person's current ability estimate, as described above. With every item targeted directly at discerning a candidate's true ability, less time is needed to arrive at an accurate estimate of his or her cognitive skill. That means an organization can get the same precision from a 15-item test as from a traditional test of 40 or 50 items, a difference to which candidates are likely to react favorably.
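The adapt-as-you-go idea described above can be illustrated with a minimal sketch. This is a simplified "staircase" stand-in, not DDI's actual algorithm (production adaptive tests typically rely on item response theory for item selection and scoring); the `answer_item` callback, the difficulty scale of 1 to 10, and the averaging-based score are all illustrative assumptions.

```python
# Minimal staircase sketch of adaptive item selection: after each
# response, difficulty moves up if the answer was correct and down
# if it was wrong, so items converge on the candidate's ability.
# NOT a real scoring method; real adaptive tests use item response
# theory, not a simple average of administered difficulties.

def run_adaptive_test(answer_item, num_items=15, levels=10):
    """Administer `num_items` items, adapting difficulty each step.

    `answer_item(difficulty)` is a hypothetical callback that presents
    an item at the given difficulty (1..levels) and returns True if
    the candidate answered it correctly.
    """
    difficulty = levels // 2  # start in the middle of the range
    history = []
    for _ in range(num_items):
        correct = answer_item(difficulty)
        history.append(difficulty)
        if correct:
            difficulty = min(levels, difficulty + 1)  # harder next item
        else:
            difficulty = max(1, difficulty - 1)       # easier next item
    # crude ability estimate: average difficulty of the items seen
    return sum(history) / len(history)
```

For example, simulating a candidate who answers correctly whenever difficulty is at most 7 (`run_adaptive_test(lambda d: d <= 7)`), the items quickly climb from the midpoint and then oscillate around level 7, which is why only a handful of items are needed to home in on ability.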
They are fairer. The new adaptive cognitive tests replace the text-based items that predominated in the past with non-text-based items. One major benefit is that this eliminates opportunities for differing interpretations of items grounded in cultural or language differences, mitigating the risk of adverse impact when the test is included as part of a selection system. It also allows for global delivery of the test, as language is no longer a barrier.
As jobs continually change, it’s a good idea to periodically look at your selection systems to make sure the tools you are using meet all of your criteria. Now is also a good time to examine (or reexamine) your systems through the lens of cognitive ability. As your jobs evolve, do they require more advanced reasoning skills? And do the assessments you are currently using help you get at the cognitive data you need? What are the potential consequences of your organization not assessing for cognitive ability?
Answers to questions such as these can help guide you as you refine your selection system. And unlike in the past, you can now feel good about seriously considering adding a cognitive test to your arsenal.
Evan Sinar, Ph.D., is Director of DDI's Center for Analytics and Behavioral Research (CABER) and Chief Scientist.
For a more in-depth look at how cognitive tests have evolved in recent years, read “Cognitive Testing Has Come Out of Its ‘Shell’” or learn more about DDI’s Adaptive Reasoning Test.