Cognition and Metacognition within Self-Regulated Learning

Authored by: Philip H. Winne

Handbook of Self-Regulation of Learning and Performance

Print publication date: September 2017
Online publication date: August 2017

Print ISBN: 9781138903180
eBook ISBN: 9781315697048




This chapter describes the complex fusion of cognition, metacognition and motivation that is self-regulated learning (SRL) and identifies key foci for future research and practice. Following a selective recap of cognition and metacognition, SRL is characterized using two perspectives: Winne and Hadwin’s (1998, Winne, 2001) loosely sequenced, recursive four-phase model and Winne’s (1997) COPES model that identifies facets of a task wherein learners exercise SRL. Key challenges learners face are developing study tactics and learning strategies that SRL manages, and articulating the role of motivation in SRL. With these topics as backdrop, three goals are highlighted for future research: using data from multiple channels, tracing motivation as a dynamic variable over the timeline of a task, and meeting the critical need to better trace metacognitive monitoring and control. A strong recommendation is offered for reconceptualizing practice in ways that support learners as learning scientists who experiment with “what works” as they self-regulate learning.


Theoretical Lenses for Viewing Self-Regulated Learning


The origin of the word cognition is the Latin cognoscere meaning essentially “to come to know.” Coming to know is a process that takes in information—input—and produces information—output. Kinds of information processed in cognition are diverse. Fundamentally, they correspond to kinds of information available to the human senses plus one kind of information humans invented—symbol systems. In school, the most dominant symbol systems are text, mathematics and diagrammatic representations of various sorts.

Cognitive processes that operate on information are commonly named with reference to a result of the operation: encoding creates an encoded form of information; retrieving brings previously encoded information from long-term memory into working memory where it can be operated on further. A short list of commonly described cognitive processes also includes: comprehending, predicting, solving, reasoning and imaging. Much cognition in school and life engages operations that are learned algorithms or heuristics designed to accomplish people’s goals. Examples are long division, learning strategies and mnemonics (e.g., first-letter acronyms, the alphabet song) and rules (e.g., i before e except after c). Metaphorically, the mind is programmable. Many operations require information to be in a prepared form. Examples are using an index to a book and proving a geometric theorem using a succession of previously established theorems and axioms.

Learned processes have a typical developmental trajectory. At first, they are quite observable and very effortful. Often, a learner verbally or subvocally describes each component or step before it is carried out. At early stages of learning, transitions across steps or stages in multi-step processes are tentative, and the intended result of a step is not reliably realized. With practice, steps reliably lead to the intended result, setting a stage to fuse them into a smooth series. As this happens, adjacent steps form subunits that become increasingly difficult to disassemble. After extensive practice, learned processes become automated. Automated processes are carried out rapidly, they reliably produce intended results and typically “run off” without one’s noticing. If one tries to disassemble an automated process, the process often slows dramatically and may even spawn errors.

In contrast to cognition that operates on information by a learned automated procedure, other cognitive operations are basic or “primitive.” These are possibly innate to the human cognitive system and they resist analysis into simpler forms. Notwithstanding, learners can engage both learned and basic operations mindfully, with purpose. I proposed a set of five basic cognitive operations: searching, monitoring, assembling, rehearsing and translating (Winne, 1985, 2010a). Table 3.1 defines each of these five basic cognitive operations and provides examples. If I apply one learned cognitive tactic I know, assembling a first-letter mnemonic, this set of cognitive operations can be encapsulated by the acronym SMART.

Table 3.1   Basic cognitive processes

Searching: Directing attention to information that meets standards.
    Examples: Retrieving the chemical symbol for gold (Au). Paging through a chapter to locate a fact.

Monitoring: Identifying whether or the degree to which information corresponds to standards.
    Examples: Judging whether to use “affect” or “effect.” Checking the steps in solving a problem.

Assembling: Joining previously separate information by identifying a relationship.
    Examples: Linking items and labeling links in a concept map. Developing a timeline of events.

Rehearsing: Preserving or re-instating information in working memory.
    Examples: Rotely rehearsing definitions of terms. Practicing typing on a keyboard.

Translating: Transforming the representation of given information.
    Examples: Graphing the parabola y = x² - 2x + 4. Paraphrasing a famous quote.

It is often challenging for students (and other thinkers) to thoroughly and reliably observe their cognitive operations. When information about cognition is not directly available or is missing, people typically make inferences about cognitive operations. Ingredients for inferences are mainly: (a) changes in properties of output(s) compared to input(s), (b) time between input and output events (i.e., latency), and (c) behaviors that can be made observable by supplementing memory and one’s sense impressions with instrumentation: eye tracking gear can record visual searches, highlighting tools permanently identify text that was monitored and judged to have particular attributes (e.g., “That’s important”), and a video can record a search for information when using a book’s index or translating numbers into counts represented by extended fingers.


Turning to metacognition, meta originates in the Greek meaning principally “after” or “beyond.” Its use in English often signifies “about” the category that is modified by “meta.” Meta-X is information about X. In this sense, metacognition is cognition about the information input to or output by cognition, as well as information about the operations that work on information. An important feature of metacognition is that what differentiates it from cognition is not the operations involved. I argue the same fundamental cognitive processes are used in cognition and in metacognition (Winne, 2011).

In other words, the topics of metacognition are qualities of thoughts and thinking. Here is an example of metacognition’s appearance in a learned form of cognition, a basic study tactic. As a learner studies an assigned chapter, each time a term appears in italics, as identified by monitoring for this typographical cue, the learner searches the text for information that matches the common form of a definition (e.g., monitoring for cues like “… is defined as” or “…, meaning”), translates the features provided by the cued information into an example by calling on (i.e., searching) prior knowledge and checks (i.e., monitors) that each key feature is represented in the constructed example. Upon completing this study tactic, the learner metacognitively thinks, “That worked quite well these last few times.” Here, the learner is monitoring qualities of products of the study tactic. Those qualities might describe that the tactic: (a) completes reliably, (b) is not too effortful, (c) can be executed rapidly and (d) boosts confidence in a judgment about how well material is understood. The learner adds to these thoughts, “… and I feel pretty confident it will help me on the test.” This involves recalling meta-features of test items and test taking experiences, such as: (a) knowledge of definitions is often called for, and (b) confidence in test answers is higher for items that ask for definitions when those definitions were studied using the tactic.
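The study tactic just described can be caricatured as a small procedure. This is a loose illustrative sketch, not the chapter's own code: the `*term*` markup standing in for italics, the cue list, and the function name are all assumptions made here for demonstration.

```python
import re

# Cues the chapter gives for the common form of a definition.
DEFINITION_CUES = ("is defined as", ", meaning")

def definition_tactic(text: str) -> list:
    """For each italicized term (marked *term* in this sketch), monitor for a
    typographical cue, then search forward for a definition cue and extract
    the definition. Translating it into an example would follow, using prior
    knowledge (not modeled here)."""
    results = []
    for match in re.finditer(r"\*(\w+)\*", text):      # monitor: italics cue
        term = match.group(1)
        tail = text[match.end():].lstrip()
        for cue in DEFINITION_CUES:                    # search: definition pattern
            if tail.startswith(cue):
                definition = tail[len(cue):].split(".")[0].strip()
                results.append((term, definition))
                break
    return results

definition_tactic("A *schema* is defined as an organized knowledge structure.")
# → [("schema", "an organized knowledge structure")]
```

Monitoring the constructed example against the definition's key features, as the chapter describes, would be a further metacognitive step on top of this object-level routine.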

Like theories of cognition, theories about metacognition also are diverse. Research has investigated metamemory—what a learner knows about how memory works and factors that influence the retrievability of information (see Thiede & de Bruin, 2018/this volume); metacognition—what a learner knows about cognitive events, including the probability they generate a successful product, the typical pace of particular forms of cognition, factors that affect cognition such as load and vigilance; and meta-emotion—how a person feels about the experience of a particular emotion (see Efklides, Schwartz, & Brown, 2018/this volume).

Nelson and Narens (1990) provided a precise description of metacognition:

  • Principle 1. The cognitive processes are split into two or more specific interrelated levels … the meta-level and the object-level.
  • Principle 2. The meta-level contains a dynamic model (e.g., a mental simulation) of the object-level.
  • Principle 3. There are two dominance relations, called “control” and “monitoring,” which are defined in terms of the direction of the flow of information between the meta-level and the object-level.
Nelson and Narens’s third principle can be usefully represented in the form of a production system: if–then. For example, if information at the object level is monitored according to a profile of attributes and is determined to differ sufficiently from a meta-level profile, then exercise agency to modify cognition at the object level by searching for a form of cognition that is judged at the meta-level to be more productive. This interplay between cognition and metacognition is the focus of theories and research on SRL (Winne, 1995a, 1995b, 1997, 2001, 2010a).
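The monitoring/control interplay can be sketched in code as a minimal if–then production. This is an illustration under assumptions made here, not Nelson and Narens's formulation: states and standards are dictionary "profiles," and fit is a simple proportion of matched attributes.

```python
def monitor(state: dict, standards: dict) -> float:
    """Monitoring: information flows from the object level to the meta-level.
    Return the fraction of standards the current state satisfies."""
    met = sum(1 for k, v in standards.items() if state.get(k) == v)
    return met / len(standards)

def control(state: dict, standards: dict, alternatives: list) -> dict:
    """Control: information flows from the meta-level to the object level.
    If monitoring detects a mismatch (the "if"), search for an operation
    judged more productive and apply it (the "then")."""
    if monitor(state, standards) < 1.0:
        for op in alternatives:
            if monitor(op(state), standards) == 1.0:
                return op(state)
    return state

# Example: rereading restores recallability only; self-explaining also
# produces understanding, so control settles on it.
reread = lambda s: {**s, "recallable": True}
self_explain = lambda s: {**s, "recallable": True, "understood": True}

state = {"recallable": False, "understood": False}
standards = {"recallable": True, "understood": True}
new_state = control(state, standards, [reread, self_explain])
```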

Self-Regulated Learning

Research on SRL has been vibrant for approximately 40 years (see Winne, in press). Hadwin and I (Winne & Hadwin, 1998; see also Dimmitt & McCormick, 2012) proposed a model of SRL that unfolds over four loosely sequential and recursive phases. In the first phase, the learner searches the external environment plus her memory to identify conditions that may have bearing on a task she is about to begin. This information represents context as the learner perceives it. In phase two, the learner forges goals for working on the task and drafts plans to approach those goals. Phase three is where work begins on the task itself.

Throughout all three of these phases, the self-regulating learner monitors information about (a) how learning was enacted using cognitive operations (e.g., SMART processes), study tactics and learning strategies; and (b) changes in the fit of internal and external conditions to various standards. For example, after mapping external conditions, the learner may judge she has only moderate efficacy and forecasts she will need help. Searching her store of knowledge and judging she is not very well equipped for this task, she becomes slightly anxious and sets a goal to seek help from others. A plan is designed to seek help that is either just in case, e.g., texting a friend to see if he will be in the library during study hall in the afternoon; or just in time, e.g., texting her friend at the moment need arises. Each plan, not yet enacted, is monitored for whether it seems it will sufficiently allay her anxiety. If not, an adaptation may be constructed.

Phase four of Winne and Hadwin’s model of SRL is where learners elect to make substantial changes in their approach to future tasks. This process reflects what Salomon and Perkins (1989) called forward-reaching transfer. Changes learners can make take two main forms: large shifts in standards they use for metacognitive monitoring in a particular context, and significant rearrangements of links between the results of metacognitive monitoring and actions taken (i.e., learning tactics and strategies) conditional on the outcome of metacognitive monitoring. In terms of a production system, this modifies if A, then B to become if A, then C.
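In production-system terms, the phase-four change above can be sketched as rewriting a rule's action while keeping its condition. The rule contents and function name here are hypothetical illustrations, not part of the model.

```python
def adapt(rules: dict, condition: str, new_action: str) -> dict:
    """Phase-four adaptation: keep condition A, swap action B for action C,
    leaving the original rule set intact for comparison."""
    updated = dict(rules)
    updated[condition] = new_action
    return updated

# "if A, then B" ...
rules = {"judged material unfamiliar": "reread the chapter"}
# ... becomes "if A, then C" for future tasks (forward-reaching transfer).
rules = adapt(rules, "judged material unfamiliar",
              "self-test, then restudy missed items")
```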

Facets of Tasks in SRL: The COPES Model

At every phase of SRL, learners engage in micro, meso or macro tasks. Each task can be modeled using a five-part schema that marks conditions, operations, products, standards and evaluations—the COPES model (Winne, 1997). Conditions are elements the learner perceives could affect work on the task. Internal conditions are characteristics the learner brings to a task, such as knowledge about the topic, study tactics and learning strategies, motivational orientation and epistemological beliefs (see Muis & Singh, 2018/this volume). External conditions are features in the surrounding environment the learner perceives could influence internal conditions or either of two other facets of tasks: operations and standards. Operations work on information, as noted in the description of the basic SMART operations and composite operations, such as study tactics and learning strategies. Every operation generates products. Some products relate to the goal of the task, for example, ordering by date the French monarchs during the Renaissance or finding the intercept(s) of a quadratic function. Other products are a result of metacognition, such as judging whether it is worth the effort to construct a mnemonic for elements in the actinide series versus just memorizing them. Products are evaluated using standards. The set of standards operationalizes the goal of carrying out operations to produce a particular product. For example, a high-quality first-letter mnemonic (a) includes one letter for each item to be identified, (b) is pronounceable and (c) is memorable (e.g., “A SMART student COPES well with tasks”).
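The five COPES facets can be laid out as a small data sketch. Field names follow the chapter's terminology; the class, method, and example values are assumptions introduced here for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class COPESTask:
    conditions: dict                      # internal and external conditions
    operations: list                      # SMART operations, tactics, strategies
    products: list                        # information the operations generate
    standards: dict                       # criteria that operationalize the goal
    evaluations: dict = field(default_factory=dict)

    def evaluate(self, product: str, checks: dict) -> bool:
        """Evaluation: monitor a product against every standard."""
        self.evaluations[product] = all(checks.get(k) for k in self.standards)
        return self.evaluations[product]

# The chapter's mnemonic example, expressed in this sketch.
task = COPESTask(
    conditions={"internal": ["knows actinide series"], "external": ["textbook list"]},
    operations=["assemble first-letter mnemonic"],
    products=["A SMART student COPES well with tasks"],
    standards={"one letter per item": True, "pronounceable": True, "memorable": True},
)
ok = task.evaluate("A SMART student COPES well with tasks",
                   {"one letter per item": True, "pronounceable": True, "memorable": True})
```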

Qualities of SRL

Throughout all the phases of SRL, learners’ motivations and emotions are influential (see Efklides, Schwartz, & Brown, 2018/this volume). These arise automatically as learners engage cognitive and metacognitive processes (Buck, 1985; Zajonc, 1980). Motivational and emotional states play three important roles. First, they are internal conditions the learner surveys in phase one of self-regulated work. Second, standards used in metacognitive monitoring can refer to the presence of, or level of, motivations and emotions. Third, learners can set goals to regulate motivation and emotion in the same general way as they regulate cognition. In this case, motivations and emotions become objects manipulated when learners exercise tactics and strategies via metacognitive control.

A further critical theoretical account about SRL concerns the essence of self-regulation. The learner is in charge. Whatever supports or constraints exist as external conditions and whatever may be the character of an intervention designed to promote elements of SRL, the learner is the decision maker and the actor. Were it otherwise, by definition, regulation would not be self-regulation but other-regulation (see Hadwin, Järvelä, & Miller, 2018/this volume; Winne, 2015).

A corollary of this axiom is that learners engaged in SRL are the principal investigators in a personal program of research. They investigate and mobilize ever more effective tactics and strategies that help to achieve goals. Importantly, the standards they use to judge effectiveness of tactics and strategies are theirs, as are the goals they set. These may or may not match an instructor’s, tutor’s or group mate’s goals.

Each SRL event is a potential experiment. From this perspective, learners are learning scientists. Like “certified” learning scientists, learners gather and analyze data to feed evolving theories about why their approaches to learning are more or less successful. This is challenging scientific work owing to the multivariate nature of the learning environment and difficulties people encounter with scientific reasoning (Winne, 1997, 2010a). Learners need help with at least three main tasks: (a) gathering reliable data about how they enacted learning and associating those data with effects, (b) gaining access to tactics and strategies for learning that metacognitive control can recruit and (c) finding opportunities to practice newer tactics and strategies to bring them to the status of automated skills. Woven throughout all this, learners need help in applying the scientific method to develop valid interpretations about their experiments in learning.

Research on Cognitive and Metacognitive Processes

Because SRL works at the meta-level to modulate knowledge, skills, motivation and emotion at the object level, this chapter cannot do full justice to the range of research on cognitive and metacognitive processes in SRL. Select topics and select research are included here about effective study strategies and factors bearing on learners’ metacognition.

Are Study Strategies Effective?

Metamemory refers to what a learner knows about processes involved in learning and memory, including beliefs learners have about tactics and strategies for learning. It appears undergraduates, at least, are quite undereducated about these matters. In response to an open-ended question about strategies used to study, Karpicke, Butler and Roediger (2009) reported the most frequently cited study tactic was reading one’s notes or the textbook. McCabe (2011) investigated learners’ predictions about the utility of six factors that have general empirical support in learning science as affecting learning: dual coding (i.e., it is generally better to study material presented in multiple modalities than a single modality), animation overload (i.e., it is generally better to study static material), seductive details (i.e., high interest but less relevant details can rob resources from learning key content), the testing effect (i.e., memory is generally improved by testing knowledge vs. restudying it), the spacing effect (i.e., memory is generally better when studying sessions are distributed over time vs. cramming) and the generation effect (i.e., creating a personal representation of content generally improves memory). For the generation effect, 50% of undergraduates correctly endorsed it. Endorsements of the more productive approach to learning for the first five items in this set ranged from 10% to 38%.

While undergraduates may know little about how to study as recommended by evidence from learning science, this can be remedied. A large variety of studies have shown learners can be taught or “pick up” without much training a variety of specific study tactics and learning strategies that benefit learning outcomes in the lab and in authentic settings (e.g., Dunlosky, Rawson, Marsh, Nathan, & Willingham, 2013). Perhaps the most authentic of these studies is Tuckman and Kennedy’s (2011). They taught a diverse group of undergraduates in a large midwestern U.S. university a collection of generic strategies for managing one’s studies, taking responsibility for learning, planning and asking questions about learning activities and assignments, and seeking and using feedback about learning, a category loosely matching engaging in metacognitive monitoring and metacognitive control. Two notable features of this study are that the course was lengthy (a semester) and the intervention directly addressed motivational as well as cognitive features of undergraduates’ experiences as learners. In comparison to a carefully matched comparison group in this quasi-experiment, students taking the learning strategies course prospered. Odds of continuing to enroll (i.e., retention) were more than six times greater for students who took the learning strategies course. Grade point averages for learning strategies course takers and non-course takers both showed a decline across semesters but learning strategies course takers were statistically identified as having a higher GPA than students in general.

Tuckman and Kennedy’s study sparks optimism. The balance of work in this area indicates learners taught learning tactics and strategies experience quite variable but moderately positive results (Donker, de Boer, Kostons, van Ewijk, & van der Werf, 2014; Winne, 2013). Two features of strategy instruction boost chances of benefits: increase opportunities for metacognitive monitoring using standards that focus on both cognitive processes and products, and enhance feedback to address not only products but also cognition and metacognition directly (Schraw & Gutierrez, 2015; Winne, 1985). But learners need additional support. Object-level cognitive processes designed to benefit learning give rise to metacognitive experiences that learners often misconceive, and learners’ exercise of metacognitive control based on these misconceptions can undermine learning. Metacognitive knowledge and interpretations of metacognitive experiences are important. The next section examines this topic.

Factors Bearing on Learners’ Metacognition About Tactics and Strategies

To oversimplify the complex recursive unfolding of processes and their products that fuel updates across the timeline of a task, consider a snapshot of work at a moment in time—a state. Resources the learner has available in a state of work are the contents of working memory plus whatever information is perceived about the external environment. Importantly, factors that learners scan internally and externally are also fundamentally shaped by memory and its contents, i.e., metacognitive knowledge (see Muis & Singh, 2018/this volume). Everyone, including learners, faces challenges of memory. Those challenges sometimes prevail.

Commonly, learners are overconfident about what they know. As a consequence, they often elect not to restudy content when it would benefit them. Several factors are at play as reviewed by Bjork, Dunlosky and Kornell (2013). First, when material appears easy to grasp, this fluency in encoding appears to mislead learners to forecast that the studied material will be easily recalled. Unluckily, there is only a small correlation between encoding fluency and recall. Second, material that is perceptually emphasized (e.g., by priming memory with keywords or by styling type font such as italicized terms) is judged easier to learn. It is not. Third, inducing relationships, such as those between characteristics of artworks and artists’ names, may be perceived as easier when content is presented in blocked fashion, such as all the art by one artist, then all the art by the next artist. Like the false sense of encoding fluency, the ease of inducing relationships when content is blocked also leads learners to judge they have learned better. A mixed presentation produces better outcomes.

The story here is truly meta. Learners observe meta-features about the content they study and their experience as they study that content. What might be considered “obvious” cues about the quality of learning are not inherently probative (see Koriat, 2016). There are remedies, commonly grouped under the apt label of desirable difficulties (Bjork & Bjork, 2011). The general form of a desirable difficulty is to engage the learner in a kind of object-level cognitive processing that the learner might usually avoid because it appears unnecessarily difficult. But there are cases where the very impairment to performance that gives rise to this perception of unnecessary difficulty in the short-term is a benefit to longer-term recall. A prime example is distributed practice where the schedule for reviewing previously studied content spreads out over a timeline rather than reviewing immediately after or very close in time to a first study session. Laboring to recall prior material that is not “at hand” enhances memory for that content (e.g., Roediger & Butler, 2011). But learners prefer material to be blocked or massed, setting a stage for overconfident judgments of retrievability because material they study repeatedly in one session is recognized rather than having to be retrieved. Recognizing material is easier but less productive.
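A distributed practice schedule can be sketched numerically. The expanding-gap policy below (gaps that double between reviews) is an assumption made for illustration; the chapter only says reviews should be spread over a timeline rather than massed.

```python
def spaced_schedule(first_study_day: int, n_reviews: int, base_gap: int = 1) -> list:
    """Return review days whose gaps double each time (1, 2, 4, ...),
    spreading restudy over the timeline instead of massing it."""
    days, gap, day = [], base_gap, first_study_day
    for _ in range(n_reviews):
        day += gap
        days.append(day)
        gap *= 2
    return days

spaced_schedule(0, 4)  # → [1, 3, 7, 15]
```

Each review lands after some forgetting has set in, so the learner must labor to retrieve rather than merely recognize the material, which is exactly the desirable difficulty described above.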

Overall, desirable difficulties engage learners in SMART processing at the object level that otherwise they would metacognitively choose not to carry out. While it is good that learners are metacognitively engaged in monitoring learning experiences, the metacognitive control they exercise, what they choose to do, is often subpar.

Motivational factors are at play beyond a simple preference to avoid what is judged to be unnecessary work. The fulcrum may be hindsight bias and several associated motivational factors (see Bernstein, Aßfalg, Kumar, & Ackerman, 2015). The gist of hindsight bias is a tendency to judge that a previous state was relatively predictable when, at the time that state was occurring, it was objectively not predictable; or vice versa. Hindsight bias is nicely reflected by a less formal label, the “I knew it all along” effect. For example, a learner metacognitively chooses to study material using effortful object-level processes. On later receiving a poor grade, the learner reasons: “No matter how hard I would have studied, the test was so difficult I was bound to fail anyway.” This metacognitively biased attribution to what is afterward perceived as an uncontrollable factor—the test—is an interpretation that protects self-worth. But it is a mistake because a poor outcome on the test was not a dependable prediction at the time of studying. Looking through a motivational lens, the learner need not accept blame for unproductive SRL during the study phase. And what blame there is to assign was offloaded to an external uncontrollable factor, the instructor’s unduly difficult test (Weiner, 2010). The upshot is less incentive in future studying sessions to exercise metacognitive control that activates effortful object-level processes.

In sum, metacognitive processes are informed by and constrained by metacognitive knowledge (Winne, 1995b). Knowledge in this sense is broadly interpreted to refer to the contents of memory that supply standards for metacognitive monitoring: beliefs and motivational explanations for results, as well as misconceptions (e.g., Winne & Marx, 1989) and learned tactics and strategies that fuse SMART processes with other knowledge about how to operate on information at the object level. An important implication is that learners engaging in productive SRL need a wide scope of metacognitive knowledge that is both valid and useful in the contexts of their diverse learning activities.

Vectors for Future Research on SRL

Multiple Channels for Observing SRL

Beginning readers are noticeably methodical when they decode a multisyllabic word with a “confusing” cluster of consonants, like “highway.” With extensive practice, this process becomes automated. The accomplished reader is practically unaware of decoding processes. The same is true of metacognitive processes. For a particular learner, instances of metacognitive monitoring and metacognitive control are commonly “submerged” from the learner’s ready inspection because the learner has developed automated recognition for whether a profile of features matches a standard profile of features. Similarly, the link between a judgment rendered by metacognitive monitoring and the choice identified by metacognitive control may be automated and, thus, escape inspection.

Developing instrumentation to shine light on automated SRL processes has burgeoned in recent decades. The latest work strives to synthesize a “whole picture” of SRL grounded in data gathered in real time across multiple channels such as on-the-spot think-aloud reports (Greene, Deekens, Copeland, & Yu, 2018/this volume), click-stream data generated as learners use features in software, e.g., back buttons and search boxes (Biswas, Baker, & Paquette, 2018/this volume), and eye gaze data and physiological measures (Azevedo, 2015; Azevedo, Taub, & Mudrick, 2018/this volume). Daunting challenges include: merging data across differing time scales, identifying robust indicators of object- and meta-level cognition and taming significant variability that arises across the timeline of a task and between tasks. Recent work on educational data mining (see Winne & Baker, 2013; Biswas, Baker, & Paquette, 2018/this volume) will be valuable in this work. Success in this methodological sector of research on SRL is essential in order to build a platform of learning science that not only advances the field but allows rigorous tests that can responsibly guide practice.

Motivation and Options

Today’s arena of theories of motivation is vibrant and diverse (Schunk, Meece, & Pintrich, 2014). Each offers perspective about how action and affect arise, and how consequences shape future choices. As noted earlier, cognitive and metacognitive processes in SRL are fundamentally deliberative; this is the purpose of metacognitive monitoring. As agents, learners exercise choices. Even automated routines embed within them motivational features that were deliberative at an earlier phase when the routine was becoming automated.

A significant challenge for research on SRL is characterizing motivation as a dynamic variable across the timeline of work on a task, and across tasks. The vast majority of motivation research samples very few states during a task and charts a very punctuated flow across states. Studies that offer temporal measures of motivation capture it at a coarse grain size. Trace methodologies (Winne, 2010b; see Bernacki, 2018/this volume) may offer an approach that fits SRL research. Traces are ambient data (e.g., logs of interactions with a computer) generated as learners do work they would normally do. Traces offer a sturdy platform for making inferences about underlying constructs such as metacognition and motivation. For example, learners who add marginalia to text like an exclamation point (!) or question mark (?) are tracing their monitoring of the text according to particular metacognitive standards—“This is important” and “This is incomprehensible.” These traces inherently reflect motivation-in-action.
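Coding such marginalia traces can be sketched as a simple mapping from symbols to the standards they are taken to reflect. The log format, timestamps, and the decision to leave unmapped symbols uncoded are assumptions of this sketch, not prescriptions from the chapter.

```python
# Map each marginalia symbol to the metacognitive standard it traces,
# following the chapter's two examples.
SYMBOL_TO_STANDARD = {
    "!": "This is important",
    "?": "This is incomprehensible",
}

def code_traces(log: list) -> list:
    """Turn (timestamp, symbol, location) events into inferences that the
    learner monitored the text against a particular standard."""
    coded = []
    for timestamp, symbol, location in log:
        standard = SYMBOL_TO_STANDARD.get(symbol)
        if standard is not None:
            coded.append({"t": timestamp, "where": location, "standard": standard})
    return coded

log = [(12.4, "!", "para 3"), (47.9, "?", "para 7"), (50.1, "~", "para 8")]
coded = code_traces(log)
# The "~" event is left uncoded rather than guessed at.
```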

An example is Zhou and Winne’s (2012) study. Among several other features for learners studying text online, they invited learners to click links. The links were phrases matching forms of achievement goal orientation (e.g., “Find more information about this” as a representation of mastery approach goal orientation). Their data showed two important findings: traces of motivational states differed from self-reports of goal orientations, and traces were better predictors of achievement. What needs work is conceptualizing motivation not only as an outcome but also as a standard learners use in metacognitive monitoring. Tracing standards representing motivational stances will be challenging because these standards likely fluctuate within a study session as well as across them.

Providing Opportunity for Metacognitive Monitoring and Metacognitive Control

As described throughout this volume, SRL is complex. At its hub are two expressions of metacognition: monitoring and control (Winne, 2001). Operationally defining metacognitive monitoring can take two general forms. The first and easier form is to observe ambient expressions of metacognitive control and, on that basis, infer metacognitive monitoring has occurred. A more complete inference about metacognitive monitoring requires additional evidence about the standard(s) used when a state was monitored. Suppose a learner annotates text by (a) drawing in the margin of a page a vertical line spanning several lines in one paragraph and (b) writing as a tag next to this line, “evidence?” This two-part trace operationally reflects an instance of metacognitive control. It supplies sturdy ground for inferring the learner was monitoring the text using a schema for argumentation and identified the marked lines as failing the evidentiary feature of that schema. Research on SRL must provide opportunities for learners to reveal occasions where they exercise metacognitive control by a trace. Ideally, the trace identifies which information was monitored and what standard(s) the learner used in monitoring.

The second and more demanding path for operationally defining metacognitive monitoring affords a richer characterization of SRL, at a cost to participants measured in time, effort and, potentially, willingness to volunteer for research. It is to train learners in several sets of standards, e.g., a schema for argumentation and a schema for explanation. Researchers would then observe instances of metacognitive control and examine variance in the tags, e.g., “evidence?” vs. “scope?”

Learners’ exercise of metacognitive control in SRL is evident when learners vary their use of study tactics as a function of conditions at the start of work on a task or as conditions become updated over the course of work on a task. Gathering evidence about variance in metacognitive control within SRL requires, first, the internal condition that learners are approximately equally skilled in using more than one study tactic and, second, that external conditions afford the learner approximately equal opportunities to use any of the tactics. If either feature is absent or biased (i.e., learners are unequally skilled in the several study tactics or the environment biases the product of metacognitive monitoring that sets the stage for metacognitive control), researchers’ evidence of SRL will be truncated or biased. This is not a flaw per se but needs to be acknowledged in reporting research findings.
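The signature of metacognitive control described above—tactic use that is contingent on conditions—can be sketched as a simple analysis of traced events. The event and condition labels here are fabricated for illustration; they stand in for whatever tactics and task conditions a particular study instruments.

```python
# Illustrative sketch (fabricated condition and tactic labels) of checking
# whether a learner's tactic use varies with task conditions, the signature
# of metacognitive control described in the text.
from collections import defaultdict

def tactic_rates_by_condition(events):
    """events: (condition, tactic) pairs; returns per-condition proportions
    of each tactic, so contingency on conditions is easy to inspect."""
    counts = defaultdict(lambda: defaultdict(int))
    for condition, tactic in events:
        counts[condition][tactic] += 1
    return {
        cond: {t: n / sum(tactics.values()) for t, n in tactics.items()}
        for cond, tactics in counts.items()
    }

events = [
    ("easy_text", "highlight"), ("easy_text", "highlight"),
    ("hard_text", "note"), ("hard_text", "note"), ("hard_text", "highlight"),
]
rates = tactic_rates_by_condition(events)
# Unequal proportions across conditions suggest tactic choice is contingent
# on conditions rather than fixed—provided, per the text, the learner is
# roughly equally skilled in each tactic and each tactic is equally afforded.
```

The caveat in the final comment matters: the same contingency table is uninterpretable as evidence of SRL if skill or opportunity is unequal across tactics, which is exactly the bias the paragraph above warns researchers to acknowledge.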

Implications for Practice

Because expressions of metacognition in SRL are complex, research upon which to base practice may appear piecemeal, failing to paint a whole picture. I recommend teachers and instructional designers conceptualize individual studies as offering heuristics for practices rather than unbending, must-do rules (Winne, 2017b). If this is a reasonable view to adopt for teachers who design instruction for learners, the same follows for learners who design learning for themselves as they practice SRL. A consequence is that learning to learn more effectively, the goal of SRL, will require two-way respect between learners and instructors. Each necessarily must experiment, and each should develop tolerance for well-intentioned yet less-than-optimal success.

The good news is there are promising heuristics for study tactics and SRL. An illustration is Michalsky’s (2013) study of a multi-component approach to studying scientific texts to increase scientific literacy. Learners in grade 10, other than those in a control group, were provided questions about learning in the midst of texts they studied. Questions addressed cognitive-metacognitive or motivational elements in each of four facets of work: comprehending the text, connecting ideas to prior experience, strategies for work and reflecting on the results of exercising metacognitive control. One group received only cognitively plus metacognitively focused questions, a second group received only motivationally focused questions and a third group received both. In addition to achievement data, the researcher gathered both questionnaire data reflecting aptitude-related SRL and think-aloud data reflecting event-related cognitive, metacognitive and motivational features. Embedding opportunities for learners to address these facets of learning (cognition, metacognition and motivation) boosted scientific literacy relative to the control group. An important finding of Michalsky’s study was that only the group with all three kinds of embedded questions—cognitive, metacognitive and motivational—elevated state-like views of SRL. As proposed by Panadero, Klug and Järvelä (2016), when students have greater opportunity to become aware of their processing, they have greater opportunity to adapt. In short, as straightforward a technique as explicitly inviting learners to consider how they learn can benefit achievement. But, as discussed throughout this chapter, SRL is a fusion of motivational and cognitive-metacognitive features (Bell & Kozlowski, 2008; Efklides, 2011). Students in Michalsky’s study changed their views of SRL when this fusion was part of their work.

If, as earlier described, learners are learning scientists, designs for instruction that support their research projects will need more than heuristically useful interventions, as illustrated in Michalsky’s study. Learners also need data about their learning that shine light on how they learn, and they need relief from pressures to cover overstuffed curricula and to succeed at every bit of them, so that they can experiment with learning without punishment. I offer several untested suggestions. First, leverage the power of software technologies to gather data about learning as an event (Winne, 2017a). Second, offer learners learning analytics, reports generated using trace and other conventional data (e.g., demographic, self-report, accumulating achievement) about how and what they studied, plus recommendations about how to productively adapt study routines (Winne, 2017b). Convey learning analytics in ways that encourage learners to “try out” adaptations to tactics and strategies they use to learn (Roll & Winne, 2015; Winne, 2017b). When (a) experimenting with learning becomes an accepted curriculum unto itself, (b) learners are motivated and feel safe to experiment with learning (e.g., Marzouk et al., 2016) and (c) excesses of overcrowded curricula where “everyone needs to know all of this” are pruned to make space for experimenting with learning, I predict productive SRL will have a much better chance to flourish.
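The second suggestion—learning analytics that invite experimentation—might take a shape like the following sketch. The thresholds, trace categories and messages are entirely invented; the point is only that recommendations can be phrased as invitations to "try out" an adaptation rather than as prescriptions.

```python
# Hypothetical learning-analytics sketch: turn session trace counts into
# gentle recommendations inviting the learner to experiment with a tactic.
# Trace categories, thresholds and wording are invented for illustration.
def recommend(traces):
    """traces: dict mapping tactic name -> count of uses in a study session."""
    msgs = []
    if traces.get("self_test", 0) == 0:
        msgs.append("You highlighted but never self-tested. "
                    "Try quizzing yourself on one highlighted passage.")
    if traces.get("note", 0) < traces.get("highlight", 0) / 4:
        msgs.append("Consider rewriting a few highlights in your own words "
                    "and see whether that changes what you remember.")
    return msgs

print(recommend({"highlight": 20, "note": 2, "self_test": 0}))
```

Framing output this way treats the learner as the experimenter: the analytic surfaces a pattern in their own traces and proposes a manipulation, leaving the evaluation of "what works" to the learner's next session.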


Azevedo, R. (2015). Defining and measuring engagement and learning in science: Conceptual, theoretical, methodological, and analytical issues. Educational Psychologist, 50 (1), 84–94.
Azevedo, R. , Taub, M. , & Mudrick, N. V. (2018/this volume). Understanding and reasoning about real-time cognitive, affective, and metacognitive processes to foster self-regulation with advanced learning technologies. In D. H. Schunk & J. A. Greene (Eds.), Handbook of self-regulation of learning and performance (2nd ed.). New York: Routledge.
Bell, B. S. , & Kozlowski, S. W. J. (2008). Active learning: Effects of core training design elements on self-regulatory processes, learning, and adaptability. Journal of Applied Psychology, 93 (2), 296–316.
Bernacki, M. (2018/this volume). Examining the cyclical, loosely sequenced, and contingent features of self-regulated learning: Trace data and their analysis. In D. H. Schunk & J. A. Greene (Eds.), Handbook of self-regulation of learning and performance (2nd ed.). New York: Routledge.
Bernstein, D. M. , Aßfalg, A. , Kumar, R. , & Ackerman, R. (2015). Looking backward and forward on hindsight bias. In J. Dunlosky & S. K. Tauber (Eds.), The Oxford handbook of metamemory (pp. 289–304). Oxford: Oxford University Press.
Biswas, G. , Baker, R. S. , & Paquette, L. (2018/this volume). Data mining methods for assessing self-regulated learning. In D. H. Schunk & J. A. Greene (Eds.), Handbook of self-regulation of learning and performance (2nd ed.). New York: Routledge.
Bjork, E. L. , & Bjork, R. A. (2011). Making things hard on yourself, but in a good way: Creating desirable difficulties to enhance learning. In M. A. Gernsbacher , R. W. Pew , L. M. Hough , & J. R. Pomerantz (Eds.), Psychology and the real world: Essays illustrating fundamental contributions to society (pp. 56–64). New York: Worth.
Bjork, R. A. , Dunlosky, J. , & Kornell, N. (2013). Self-regulated learning: Beliefs, techniques, and illusions. Annual Review of Psychology, 64, 417–444.
Buck, R. (1985). Prime theory: An integrated view of motivation and emotion. Psychological Review, 92, 389–413.
Dimmitt, C. , & McCormick, C. B. (2012). Metacognition in education. In K. R. Harris , S. Graham , & T. Urdan (Eds.), APA educational psychology handbook. Vol 1: Theories, constructs, and critical issues (pp. 157–187). Washington, DC, US: American Psychological Association.
Donker, A. S. , De Boer, H. , Kostons, D. , van Ewijk, C. D. , & Van der Werf, M. P. C. (2014). Effectiveness of learning strategy instruction on academic performance: A meta-analysis. Educational Research Review, 11, 1–26.
Dunlosky, J. , Rawson, K. A. , Marsh, E. J. , Nathan, M. J. , & Willingham, D. T. (2013). Improving students’ learning with effective learning techniques: Promising directions from cognitive and educational psychology. Psychological Science in the Public Interest, 14 (1), 4–58.
Efklides, A. (2011). Interactions of metacognition with motivation and affect in self-regulated learning: The MASRL model. Educational Psychologist, 46 (1), 6–25.
Efklides, A. , Schwartz, B. L. , & Brown, V. (2018/this volume). Motivation and affect in self-regulated learning: Does metacognition play a role? In D. H. Schunk & J. A. Greene (Eds.), Handbook of self-regulation of learning and performance (2nd ed.). New York: Routledge.
Greene, J. A. , Deekens, V. M. , Copeland, D. Z. , & Yu, S. (2018/this volume). Capturing and modeling self-regulated learning using think-aloud protocols. In D. H. Schunk & J. A. Greene (Eds.), Handbook of self-regulation of learning and performance (2nd ed.). New York: Routledge.
Hadwin, A. , Järvelä, S. , & Miller, M. (2018/this volume). Self-regulation, co-regulation, and shared regulation in collaborative learning environments. In D. H. Schunk & J. A. Greene (Eds.), Handbook of self-regulation of learning and performance (2nd ed.). New York: Routledge.
Karpicke, J. D. , Butler, A. C. , & Roediger, H. L. (2009). Metacognitive strategies in student learning: Do students practise retrieval when they study on their own? Memory, 17, 471–479.
Koriat, A. (2016). Processes in self-monitoring and self-regulation. In The Wiley Blackwell Handbook of judgment and decision making. Malden, MA: Wiley Blackwell.
Marzouk, Z. , Rakovic, M. , Liaqat, A. , Vytasek, J. , Samadi, D. , Stewart-Alonso, J. , Ram, I. , Woloshen, S. , Winne, P. H. , & Nesbit, J. C. (2016). What if learning analytics were based on learning science? Australasian Journal of Educational Technology, 32(6).
McCabe, J. (2011). Metacognitive awareness of learning strategies in undergraduates. Memory & Cognition, 39 (3), 462–476.
Michalsky, T. (2013). Integrating skills and wills instruction in self-regulated science text reading for secondary students. International Journal of Science Education, 35 (11), 1846–1873.
Muis, K. R. , & Singh, C. (2018/this volume). The three faces of epistemic thinking in self-regulated learning. In D. H. Schunk & J. A. Greene (Eds.), Handbook of self-regulation of learning and performance (2nd ed.). New York: Routledge.
Nelson, T. O. , & Narens, L. (1990). Metamemory: A theoretical framework and new findings. In G. H. Bower (Ed.), The Psychology of Learning and Motivation, 26, 125–141.
Panadero, E. , Klug, J. , & Järvelä, S. (2016). Third wave of measurement in the self-regulated learning field: When measurement and intervention come hand in hand. Scandinavian Journal of Educational Research, 60, 723–735.
Roediger, H. L. , & Butler, A. C. (2011). The critical role of retrieval practice in long-term retention. Trends in Cognitive Science, 15, 20–27.
Roll, I. , & Winne, P. H. (2015). Understanding, evaluating, and supporting self-regulated learning using learning analytics. Journal of Learning Analytics, 2 (1), 7–12.
Salomon, G. , & Perkins, D. N. (1989). Rocky roads to transfer: Rethinking mechanisms of a neglected phenomenon. Educational Psychologist, 24, 113–142.
Schraw, G. , & Gutierrez, A. P. (2015). Metacognitive strategy instruction that highlights the role of monitoring and control processes. In A. Peña-Ayala (Ed.), Metacognition: Fundaments, Applications and Trends, Intelligent Systems Reference Library 76, 3–16. doi: 10.1007/978-3-319-11062-2_1
Schunk, D. H. , Meece, J. R. , & Pintrich, P. R. (2014). Motivation in education: Theory, research, and applications (4th ed.). Boston: Pearson.
Thiede, K. W. , & de Bruin, A. B. H. (2018/this volume). Self-regulated learning in reading. In D. H. Schunk & J. A. Greene (Eds.), Handbook of self-regulation of learning and performance (2nd ed.). New York: Routledge.
Tuckman, B. W. , & Kennedy, G. J. (2011). Teaching learning strategies to increase success of first-term college students. The Journal of Experimental Education, 79 (4), 478–504.
Weiner, B. (2010). The development of an attribution-based theory of motivation: A history of ideas. Educational Psychologist, 45 (1), 28–36.
Winne, P. H. (1985). Steps toward promoting cognitive achievements. Elementary School Journal, 85, 673–693.
Winne, P. H. (1995a). Inherent details in self-regulated learning. Educational Psychologist, 30, 173–187.
Winne, P. H. (1995b). Self-regulation is ubiquitous but its forms vary with knowledge. Educational Psychologist, 30, 223–228.
Winne, P. H. (1997). Experimenting to bootstrap self-regulated learning. Journal of Educational Psychology, 89, 397–410.
Winne, P. H. (2001). Self-regulated learning viewed from models of information processing. In B. J. Zimmerman & D. H. Schunk (Eds.), Self-regulated learning and academic achievement: Theoretical perspectives (2nd ed., pp. 153–189). Mahwah, NJ: Lawrence Erlbaum Associates.
Winne, P. H. (2010a). Bootstrapping learner’s self-regulated learning. Psychological Test and Assessment Modeling, 52, 472–490.
Winne, P. H. (2010b). Improving measurements of self-regulated learning. Educational Psychologist, 45, 267–276.
Winne, P. H. (2011). A cognitive and metacognitive analysis of self-regulated learning. In B. J. Zimmerman and D. H. Schunk (Eds.), Handbook of self-regulation of learning and performance (pp. 15–32). New York: Routledge.
Winne, P. H. (2013). Learning strategies, study skills and self-regulated learning in postsecondary education. In M. B. Paulsen (Ed.), Higher education: Handbook of theory and research (Vol. 28, pp. 377–403). Dordrecht: Springer.
Winne, P. H. (2015). What is the state of the art in self-, co- and socially shared regulation in CSCL? Computers in Human Behavior, 52, 628–631.
Winne, P. H. (in press). The trajectory of research on self-regulated learning. In T. Michalsky (Ed.), Yearbook of the National Society for the Study of Education. Vol. 116: Self-regulated learning: Conceptualizations, contributions, and empirically based models for teaching and learning. Chicago, IL: National Society for the Study of Education.
Winne, P. H. (2017a). Leveraging big data to help each learner upgrade learning and accelerate learning science. Teachers College Record, 118 (13), 1–24.
Winne, P. H. (2017b). Learning analytics for self-regulated learning. In G. Siemens & C. Lang (Eds.), Handbook of learning analytics (pp. 241–249). Beaumont, AB: Society for Learning Analytics Research.
Winne, P. H. , & Baker, R. S. (2013). The potentials of educational data mining for researching metacognition, motivation and self-regulated learning. JEDM-Journal of Educational Data Mining, 5 (1), 1–8.
Winne, P. H. , & Hadwin, A. F. (1998). Studying as self-regulated learning. In D. J. Hacker , J. Dunlosky , & A. C. Graesser (Eds.), Metacognition in educational theory and practice (pp. 277–304). Mahwah, NJ: Lawrence Erlbaum Associates.
Winne, P. H. , & Marx, R. W. (1989). A cognitive processing analysis of motivation within classroom tasks. In C. Ames and R. Ames (Eds.), Research on motivation in education (Vol. 3, pp. 223–257). Orlando, FL: Academic Press.
Zajonc, R. (1980). Feeling and thinking: Preferences need no inferences. American Psychologist, 35, 151–175.
Zhou, M. , & Winne, P. H. (2012). Modeling academic achievement by self-reported versus traced goal orientation. Learning and Instruction, 22, 413–419.