A name can matter a lot. When social science researchers wanted to make a distinction between how students approached different aspects of the learning process, they coined the somewhat awkward term "noncognitive" to distinguish attitudes, beliefs, and attributes from content knowledge, which they labeled "cognitive." They applied their newly minted term to identify everything that was not, in their view, grounded in, or directly derived from, rational thought. This distinction reflected the idea that one type of thinking formed the basis of knowing and recalling information, and that the other originated in beliefs, attitudes, and feelings.
Perhaps it's time to move beyond our current overly cautious approach to measuring elements of the learning process that extend beyond content knowledge. Perhaps it's time to think of noncognitive dimensions of learning as forms of thinking, rather than as a process that does not involve cognition.
Are we not observing a higher form of thinking when we see students persist with difficult tasks: overcoming frustration, setting and achieving goals, seeking help, working with others, and developing, managing, and perceiving their sense of self-efficacy? Are these qualities not at least as important as knowing whether students can recall the year in which the Civil War began, or factor a polynomial? Might what we observe when we look for noncognitive factors be a more complex form of cognition, a result of executive functioning by the brain as it monitors and adjusts to circumstances to accomplish specific aims? In other words, might these behaviors be manifestations not of feelings, but of metacognition: the mind's ability to reflect on how effectively it is handling the learning process as it is doing so?
During the past century, cognitive measures have ascended to the pinnacle of credibility, while noncognitive factors have languished in relative obscurity. Consumers of content-knowledge tests assumed that noncognitive indicators lacked the methodological rigor that was the hallmark of their cognitive siblings. Psychometricians dedicated considerable resources over the years to improving cognitive measures, while they devoted relatively less effort and energy to creating noncognitive tools of comparable rigor.
Standardized-test developers created knowledge examinations that were assumed to be free of noncognitive influences. The tests didn't attempt to gauge the processes students used to learn. Over time, educators and policymakers alike came to view information from noncognitive instruments as less credible than the results from content tests. Tools such as surveys, self-reports, interviews, and third-party ratings were seen as yielding information that was distinct and separate from (and less valuable than) the cognitive data collected by standardized tests.
In essence, researchers and educators created a self-fulfilling prophecy. They anointed cognitive measures as "scientific" and pronounced noncognitive techniques incapable of meeting the technical standards they had developed for content tests. Because experts judged noncognitive methods against inappropriate standards, all noncognitive approaches came to be like the guy or girl who gets all dressed up for the party but never gets asked to dance. Over time, the very name noncognitive came to signal to educators and policymakers that the information generated by a tool or instrument would be of limited use and value in shaping curriculum, instruction, or program improvement, let alone be factored into accountability systems.
We might better describe what has previously fallen under the label of noncognitive factors as "metacognitive learning skills." While metacognition is a broad term, I apply it here to describe a subset of all its facets, just as cognitive content knowledge represents a subset of cognition. My definition of metacognitive in this context includes all learning processes and behaviors involving any degree of reflection, learning-strategy selection, and intentional mental processing that can result in a student's improved ability to learn.
If we were to apply the term metacognitive learning skills to describe the full range of behaviors, attitudes, and beliefs students demonstrate while engaging in the learning process, we could establish semantic parity between cognitive knowledge and noncognitive skills. This would be a monumental accomplishment that could lead to a dramatic increase in the development and use of new tools and techniques designed specifically to help us develop insight into student learning strategies. Gaining this type of insight would enable educators to teach students how to learn, as well as what to learn. It would also enable students to take more ownership and control over their own learning.
What if we were equally diligent in gathering and reporting information on both the cognitive and metacognitive aspects of learning? If we were to see the two terms as equals, the potential importance of the metacognitive would become clearer. The relationship between the two would be less hierarchical, more symbiotic.
Some would argue that the answer to improving student achievement is simply to measure and analyze content-knowledge acquisition in ever greater detail or with ever greater frequency. In fact, we seem to be reaching a point of diminishing returns as we parse data on content knowledge into ever smaller pieces, paired with ever more detailed prescriptions for correcting errors or remedying deficits.
What if we had an equivalent amount of information about learner skills, attitudes, and techniques that we could bring to bear along with content-knowledge measures? What sort of individualized profile might we be able to create for each learner?
We need to know all we can about students' content knowledge. But we also need to know much more about how students manage the learning process, and how their beliefs about themselves as learners affect their ability to understand and retain content knowledge. By elevating noncognitive information to an equal position relative to content knowledge, we may find the missing link needed to close the achievement gap more rapidly and effectively for the many students who possess the cognitive ability to improve their capacity to learn, but are limited by a lack of effective learning strategies and the appropriate mindset. As a first step down this road, educators, researchers, and policy leaders must be willing to rename "noncognitive measures" as "metacognitive learning skills."
My colleagues and I are willing to do so in our publications and presentations. And we encourage others to do the same. This small change can have a tremendous effect on how we think about learning and how we measure it. What's in a name? Quite a lot, as it turns out.