Two Kinds and Four Sub-Types of Misconceived Knowledge, Ways to Change it, and the Learning Outcomes

Authored by: Michelene T. H. Chi

International Handbook of Research on Conceptual Change

Print publication date: June 2013
Online publication date: July 2013

Print ISBN: 9780415898829
eBook ISBN: 9780203154472
Adobe ISBN: 9781136578212

10.4324/9780203154472.ch3

 


Conceptual Change Kind of Learning

Learning of complex material, such as concepts encountered in science classrooms, can occur under at least two different conditions of prior knowledge. In one case, a student may have some prior knowledge of the to-be-learned concepts, but it is incomplete. In this incomplete knowledge case, learning can be conceived of as gap filling, and Carey (1991) referred to this case of knowledge acquisition as the enriching kind. In a second case, a student may have already acquired some naive ideas, either in school or from everyday experiences, that are “in conflict with” the to-be-learned concepts (Vosniadou, 2004). It is customary to assume that the naive “conflicting” knowledge is incorrect, by some normative standard. Thus, learning in this second case is not adding missing knowledge or gap filling; rather, learning is changing naive conflicting knowledge to correct knowledge. This chapter focuses on this conceptual change kind of learning.

Although this definition of conceptual change appears straightforward, learning via conceptual change entails several complex, non-transparent, and interwoven issues. The existence of decades of research on conceptual change speaks to the complexity of these issues. We pose some of the key non-transparent questions as follows: (a) In what ways does naïve knowledge “conflict with” the to-be-learned materials? That is, why is conflicting knowledge misconceived and not merely incorrect? We will address the difference between merely incorrect knowledge and misconceived conflicting knowledge. (b) Is misconceived knowledge always resistant to change, or is some misconceived knowledge more easily changed? (c) How should instruction be designed to promote conceptual change? This chapter hopes to add clarity to some of these questions by offering a theoretical framework that lays out two different kinds of conceptual change, with two subtypes for each kind, as a function of how conflicting knowledge is defined. Furthermore, we postulate the processes by which such conflicting knowledge can be changed, and speculate on the kind of instruction that might achieve such change.

Four Types of Misconceived Knowledge and How They Might be Changed

Superficially, the notion of misconceived knowledge seems easy to define objectively, in that it is incorrect from the perspective of the correct to-be-learned material. However, characterizing misconceived knowledge as incorrect is simplistic because it cannot explain why misconceived knowledge is often so resistant to change. To understand why misconceived knowledge is resistant to change, we propose that there are two kinds of incorrectness: (1) knowledge can be “inaccurate” compared to correct information or to reality, such as in having an incorrect value on an appropriate property or dimension, or (2) knowledge can be “incommensurate” with correct information in not having the appropriate dimensions. “Dimension” is used here to refer to a plausible property of a concept in general, rather than the specific value on a dimension. For example, living things have the capacity (or dimensions) to “move on their own volition,” “be responsive,” and “reproduce,” whereas artifacts (non-living things) cannot even have these dimensions. In contrast, the value of a dimension is a specific feature or attribute. For the dimension of “reproducing,” the specific attribute for fish is to lay eggs, while the specific attribute for dogs is to give birth to live young. Thus, to say that a whale is the same size as a salmon is inaccurate, whereas to say that a whale is a fish like a salmon is incommensurate.
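To make the dimension/value distinction concrete, the following minimal Python sketch encodes a concept as a mapping from dimensions to values and separates the two kinds of incorrectness. The representation and the names in it are illustrative assumptions of ours, not part of the chapter’s framework:

```python
# A minimal sketch: a concept represented as a mapping from dimensions
# (general properties) to values (specific attributes on those dimensions).
# The names here are illustrative, not part of the framework itself.

whale = {"size": "large", "reproduces-by": "live birth"}
chair = {"size": "medium", "material": "wood"}  # an artifact: it has no
                                                # "reproduces-by" dimension

def kind_of_incorrectness(concept, dimension, claimed_value):
    """Classify a claim as correct, inaccurate, or incommensurate."""
    if dimension not in concept:
        return "incommensurate"  # the dimension itself does not apply
    if concept[dimension] != claimed_value:
        return "inaccurate"      # right dimension, wrong value
    return "correct"

# "A whale is the same size as a salmon" -- wrong value on a real dimension:
print(kind_of_incorrectness(whale, "size", "small"))          # inaccurate
# "A chair reproduces by laying eggs" -- the dimension is inapplicable:
print(kind_of_incorrectness(chair, "reproduces-by", "eggs"))  # incommensurate
```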

Based on these two kinds of incorrectness (inaccurate and incommensurate), conflicting knowledge can be examined in terms of four sub-types, corresponding to representations of knowledge that are commonly discussed in the cognitive science literature: individual propositions or statements, mental models, categories, and schemas. Corresponding to these four types of representations, we refer to prior conflicting knowledge as either false beliefs (at the statement level), flawed mental models (at the mental model level), category mistakes (at the categorical level), or missing schemas (at the schema level). The false-belief and flawed-mental-model kinds of conflicting knowledge are “inaccurate,” whereas the category-mistake and missing-schema kinds are “incommensurate.” Although our framework does not necessarily commit to any notions of hierarchy in the grain sizes of these representations, what is critical is our proposal that the grain size at which conflict is defined (between incorrect knowledge and the to-be-learned correct material) determines how instruction should be designed to change misconceptions.

Using these four different representational formats, we examine three key questions: In what ways do students’ naïve ideas conflict with the to-be-learned materials? How easily can such conflicting knowledge be changed? And what type of instruction or confrontation might trigger conceptual change? In the discussion below, our examples will be drawn primarily from science domains for three reasons. First, it is relatively easy to agree on what is considered correct or normative scientific information, and thus to contrast it with misconceived knowledge, which, by definition, implies prior knowledge that is incorrect as compared to some normative or scientifically based information. Second, misconceptions historically were recognized largely in science domains. Third, we draw our examples from science domains for which we have some data, primarily taken from concepts such as the human circulatory system and diffusion. For the headings of the three sections below, the first segment serves as a label for how knowledge is misconceived, the second segment describes the kind of conceptual change that can occur, and the third segment refers to the kind of confrontation and/or instruction that may produce conceptual change.

False Beliefs: Belief Revision from Refutation

Students’ naive knowledge can be represented at the grain size of a single idea, corresponding more or less to information specified in a single sentence or statement. We will refer to single ideas as “beliefs,” and, when they are incorrect, as false beliefs. With respect to the human circulatory system, false beliefs might include the ideas that “the heart is responsible for re-oxygenating blood” or that “all blood vessels have valves.” Such false beliefs are incorrect because it is the lungs that are responsible for oxygenating blood, and only veins, not arteries, have valves (Chi, de Leeuw, Chiu, & LaVancher, 1994; Chi & Roscoe, 2002). So in what sense do these false beliefs conflict with correct information? One can think of understanding a system (such as the circulatory system) as forming a complete schema or mental model with slots (or dimensions) and features/values for each slot/dimension, such as that there is an organ (or an agent) that is responsible for oxygenation. That is, having an agent as the cause of oxygenation is the dimension, and the specific organ is the value on that dimension. Thus, the false belief that “the heart is responsible for re-oxygenating blood” is compatible with the dimension of having an organ as the responsible agent. Therefore, the naïve belief about the heart as the responsible agent is simply false on the same dimension, in the sense that it is inaccurate or contradictory. The correct knowledge is that it is the lungs and not the heart that oxygenate blood.

If false beliefs and correct information contradict each other on the same dimension, then one would expect that instruction targeted at refuting false beliefs might succeed at correcting them, resulting in belief revision. It appears that this is true (Broughton, Sinatra, & Reynolds, 2007; Guzzetti, Snyder, Glass, & Gamas, 1993). That is, false beliefs for some topics can be corrected when learners are explicitly confronted with the correct information by direct contradiction or explicit refutation, and even by implicit refutation. A direct refutation would be a statement in the text such as “The heart does not oxygenate blood,” whereas an implicit refutation may simply not mention the heart as oxygenating blood, mentioning only the lungs as doing so. We have reported evidence obtained by de Leeuw (in Chi & Roscoe, 2002) for the success of both explicit and implicit types of refutation. The successful outcome of refutation can be called belief revision (see column 1, Figure 3.1).


Figure 3.1   Four types of conflicting knowledge, ways to change it, and the outcome

However, there are many other incorrect beliefs in other domains that are not so readily revised by refutation, even though they can be stated at the grain size of a single idea. Consider, for example, conflicting beliefs such as “a thrown object acquires or contains some internal force” or “coldness from the ice flows into the water, making the water colder.” Although students can readily learn by adding new beliefs about “internal force,” such as the equation for its relation to mass and acceleration, the definition of acceleration, and so on, these newly added beliefs cannot correct a student’s conflicting belief that a thrown object acquires or contains some internal force. Moreover, such conflicting beliefs cannot be easily denied or refuted by contradiction. For example, stating that “a thrown object does not acquire or contain internal forces,” or stating that “a thrown object contains some other kind of force,” will not succeed in helping students achieve correct understanding, because these two examples of refutation contradict the conflicting beliefs on the same dimension, whereas the conflicting belief is incorrect in that it should not have that dimension at all; that is, the incorrect dimension and the correct dimension are incommensurate. In other words, it does not make sense to talk about an object as containing or not containing forces, because forces cannot be contained in objects. Thus, some conflicting beliefs are not incorrect in the false or inaccurate sense, so they cannot be explicitly or implicitly refuted. Rather, they are incorrect in the incommensurate sense, to be addressed in a later section below.

Flawed Mental Models: Mental Model Transformation From Accumulation of Belief Revisions

An organized collection of individual beliefs can be viewed as forming a mental model. A mental model is an internal representation of a concept (such as the earth), or an interrelated system of concepts (such as the circulatory system) that corresponds in some way to the external structure that it represents (Gentner & Stevens, 1983). Mental models can be “run” mentally, much like an animated simulation, to depict changes and generate predictions and outcomes, such as the direction of blood flow. A mental model can also have some underlying assumptions, in much the same way that an external model can.

A mental model can be so sparse and incomplete that learning would begin by adding and filling-in gaps in knowledge. However, adding and gap-filling a mental model would not constitute conceptual change. Therefore, in what other ways can mental models be incorrect so that learning is the conceptual change kind and not merely the enriching kind? Mental models can conflict with the normative correct model in being flawed. We define flawed to mean that the core assumptions of the flawed model are not only incorrect but also coherent in that they do not contradict each other, even though they may contradict the assumptions of the correct model. Moreover, students can use their naïve but coherent flawed mental model to offer similar and consistently incorrect explanations and predictions in response to a variety of questions. Thus, a flawed mental model is an incorrect naïve model that has coherence among its assumptions and consistency in its predictions and explanations.

We can capture the structure of a student’s flawed mental model by examining the pattern and consistency of the generated explanations and predictions (Chi, 2000; Chi, Slotta, & de Leeuw, 1994; Vosniadou & Brewer, 1992, 1994). The accuracy of the flawed mental model can be further validated by predicting and testing how that student will respond to additional questions. For example, about half of the participants in our studies had an initial “single-loop” model of the human circulatory system. According to this flawed model, blood goes to the heart to be oxygenated, then it is pumped to the rest of the body, then back to the heart. (In contrast, the correct “double-loop” model has two paths. One path leads from the heart to the lungs, where blood is oxygenated before returning to the heart. The second path leads from the heart to the rest of the body and back to the heart.) In order to confirm that our assessment of the flawed single-loop model is accurate, we can design additional questions to see if students will respond as expected, on the basis of the single-loop model.
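As a toy illustration of what it means to “run” these two models (the graph encoding is our own assumption, not a representation from the studies), the flawed and correct models can be written as directed graphs that generate different predictions about where blood goes after it leaves the heart:

```python
# A toy sketch: each mental model is a directed graph of blood flow.
# "Running" the model = following edges to predict where blood goes next.

single_loop = {            # flawed model: heart oxygenates, one loop
    "heart": ["body"],
    "body": ["heart"],
}

double_loop = {            # correct model: two paths out of the heart
    "heart": ["lungs", "body"],
    "lungs": ["heart"],    # blood is oxygenated here, then returns
    "body": ["heart"],
}

def predict_next(model, organ):
    """Run the model one step: where does blood go from this organ?"""
    return model.get(organ, [])

# The two models make different predictions after the heart:
print(predict_next(single_loop, "heart"))   # ['body']
print(predict_next(double_loop, "heart"))   # ['lungs', 'body']
```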

In what way does a flawed single-loop model conflict with the correct double-loop model? We propose that the flawed model conflicts with the correct model in that their core underlying assumptions contradict each other. For example, the three fundamental assumptions underlying a flawed single-loop model are that it is the heart that oxygenates blood, that there is therefore only one loop, and that the lungs serve no special purpose other than as a destination to which blood has to deliver oxygen. In contrast, the correct double-loop model holds three contradictory assumptions: that it is the lungs that oxygenate blood, that there are two loops, and that the lungs play an important role as the site of oxygenation.

These different core assumptions result in different predictions about where blood goes after it leaves the heart, different explanations with respect to where blood is oxygenated, and different elements in terms of whether or not lungs play an important role in oxygenation. Thus, in an alternative way to characterize the differences in the underlying assumptions of the two models, one could instead say that two models are “in conflict with” each other because they (a) make different predictions, (b) generate different explanations, and (c) use different elements in their explanations. Notice that these criteria of conflict – different predictions, different explanations containing different elements – are the ones mentioned by Carey (1985) as compatible with the notion of incommensurate from the philosophy of science. In our framework here, we propose that these two conflicting models are not incommensurate because their underlying assumptions contradict each other on the same dimensions, even though the different assumptions do generate different predictions, explanations, and elements. Instead, we would reserve the term incommensurate for knowledge that is “in conflict” either laterally or ontologically, to be discussed in a following section.

Likewise, Vosniadou and Brewer (1992) have shown that young children have flawed mental models of the earth, such as a flattened square disk model. Based on what children say, one could infer that the fundamental assumption underlying a flattened disk model is that the shape of the earth is flat and finite in size, therefore predictions from such a “flat earth” model would be that one should look down to see the earth and that there is an edge from which people can potentially fall off. In short, flawed mental models are coherent in the sense that their underlying assumptions do not contradict each other, and consistent in that students retrieve and use them repeatedly to answer questions and make predictions, allowing researchers to capture the structure of their mental models by analyzing the systematicity in the pattern of their responses (see also McCloskey, 1983; Samarapungavan & Wiers, 1997; Vosniadou & Brewer, 1992; Wiser, 1987). Thus, a flawed mental model is “in conflict” with the correct model in the sense that the two models hold different assumptions, thus generating different predictions and explanations.

We refer to successful modification of a flawed mental model as mental model transformation. But how should we design instruction to induce mental model transformation? There are three ways. First, one could refute many false beliefs in the same way one would refute a single false belief, as discussed in the previous section; cumulatively, the many belief revisions can change the flawed model to the correct model. A second method is to confront the naïve flawed model holistically. And a third method might be to refute the model’s basic assumptions. There is scant evidence supporting these instructional approaches; they are briefly described next.

Accumulation of Many Individual Belief Revisions

Although we have described conflicting mental models at the mental-model level (such as a flat earth vs. a spherical earth and a single-loop vs. a double-loop), traditional instruction typically consists of a description of the correct model one sentence at a time, ignoring what individual students’ flawed models are. This means that a learner’s flawed model is confronted with a description of the correct model presented one sentence at a time, such that each sentence can either refute (explicitly or implicitly) an existing belief or not, as discussed in the preceding section on belief revision.

From the perspective of a mental model, there are two possible outcomes when instruction is presented sentence-by-sentence. In the first case, information presented in a given sentence or sentences may not refute (explicitly or implicitly) any of the learner’s prior beliefs. Instead, the information might be new or more elaborate than what the learner knows. In such a case, the learner can assimilate by embedding or adding the new information from the sentences into her existing flawed model, so that her mental model is enriched, but continues to be flawed. For example, in the case of a single-loop flawed model, learners assume that blood from the heart goes to the rest of the body to deliver oxygen. Such models lack the idea that blood also goes to the lungs, not to deliver oxygen but to receive oxygen. Upon reading a sentence such as “The right side [of the heart] pumps blood to the lungs and the left side pumps blood to other parts of the body,” students with a single-loop model may not find it to contradict any beliefs in their flawed single-loop model, since they interpret the sentence to mean that the right side pumps blood to the lungs to deliver oxygen (rather than to receive oxygen), just as it does to the rest of the body. Therefore, even though at the mental-model level, the sentence conflicts with the learner’s flawed model, at the belief level, the sentence does not directly contradict the learner’s prior beliefs. Thus the learner does not perceive a conflict, and the new information is assimilated into the flawed model (Chi, 2000). In short, assimilation of new information occurs when a learner does not perceive a conflict at the belief level, even though from the researcher’s perspective, the new information is in conflict with the learner’s flawed mental model.

The second possible outcome of sentence-by-sentence instruction is that new information presented does refute a learner’s false beliefs and the learner recognizes the contradiction. Under such circumstances, as described in the preceding section, false beliefs that are explicitly or implicitly refuted (or ignored) do predominantly get revised (de Leeuw, 1993). The relevant question with respect to mental models is: Does the accumulation of numerous belief revisions eventually result in the transformation of a student’s flawed mental model to the correct model? The answer is yes, by and large.
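The two outcomes of sentence-by-sentence instruction can be sketched as follows. This is a deliberate simplification of ours: whether a contradiction is perceived is passed in as a flag, because for real learners the perception of conflict is exactly what can fail.

```python
# Hypothetical sketch: beliefs as a mapping from dimensions to values.
# A sentence triggers revision only when the learner perceives a direct
# contradiction at the belief level; otherwise it is assimilated.

def process_sentence(beliefs, dimension, stated_value, perceived_conflict):
    if perceived_conflict and beliefs.get(dimension) != stated_value:
        beliefs[dimension] = stated_value        # refutation recognized: revise
        return "revised"
    beliefs.setdefault(dimension, stated_value)  # folded into the flawed model
    return "assimilated"

learner = {"oxygenating-organ": "heart", "loops": "one"}

# "The lungs oxygenate blood," read as contradicting a prior belief:
print(process_sentence(learner, "oxygenating-organ", "lungs", True))       # revised
# "The right side pumps blood to the lungs," interpreted as compatible with
# the single loop (blood goes there to deliver oxygen), so no conflict is
# perceived and the sentence is merely assimilated:
print(process_sentence(learner, "destination-of-blood", "lungs", False))   # assimilated
```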

According to our data, by reading and self-explaining a text passage about the human circulatory system, five of eight students (62.5%) with a prior flawed single-loop model transformed their flawed models to the correct model. Similarly, in Vosniadou and Brewer’s (1992) data, 12 of 20 children (60%) developmentally acquired the correct spherical model of the earth by the fifth grade, suggesting that their flawed mental models had undergone transformation. In short, again, for domains such as the circulatory system and the earth, coherently flawed mental models can be successfully corrected and transformed into the correct model, in over 60% of the population, with either relatively brief instruction from text (in the case of the circulatory system) or from general development and learning in school (in the case of the earth). Thus, conceptual change can be achieved in that conflicting flawed mental models can be transformed into the correct model when false beliefs within a flawed model are refuted by instruction and recognized by students as contradictions, so that the students can self-repair their flawed mental models (Chi, 2000) by revising their individual false beliefs.

Holistic Confrontation

Since flawed models and the correct model conflict at the mental-model level (flat earth vs. spherical earth; single-loop vs. double-loop), an instructional method based on holistic confrontation may induce successful model transformation. One way to design a holistic confrontation is to have students examine a visual depiction (e.g., a diagram) of their own flawed mental model, then compare and contrast it with a diagram of the correct model. We conducted a study using holistic confrontation in the following way. We pre-selected college students who had a flawed single-loop model of the circulatory system. Prior to reading a text passage about the circulatory system, we had them compare and contrast a diagram of the flawed single-loop model, which they agreed was their model, with the diagram of the correct double-loop model. We compared their learning gains with a control group who self-explained a diagram of the correct double-loop model only. We found that the compare-and-contrast group learned more than the self-explain group (Gadgil, Nokes, & Chi, 2011). Thus, holistic confrontation might be a feasible way to achieve mental model transformation.

Refuting the Underlying Core Assumptions

A third method to transform a flawed mental model might be to refute the underlying assumptions. Although a flawed mental model is composed of many correct and many false beliefs, it appears that the core assumptions are the most critical in determining the extent to which a model is flawed. For example, across the various studies for which we have assessed students’ initial mental models of the circulatory system, we found 22 students (about 50%) to have the flawed single-loop model prior to instruction. The number of correct beliefs held by these 22 students varied widely, ranging from five to 35. For example, five students held between 10 and 15 correct beliefs, and four students held between 25 and 35 correct beliefs, yet the false beliefs are all embedded within the flawed single-loop model (see Figure 2 in Chi & Roscoe, 2002). This variability suggests that knowing and learning many correct beliefs does not guarantee successful transformation of a flawed mental model to the correct model, unless the false assumptions are revised. We know of no study that has attempted to refute the underlying assumptions directly. However, we do know that when the core assumptions are not refuted, then mental model transformation is not successful. For example, when young children are told that the earth is round, they then think that the earth is round and flat like a pancake. Such instruction does not violate their core assumption that the earth is flat; therefore, their revised mental model continues to be flawed (Vosniadou & Brewer, 1992).

To recap, students’ knowledge can consist of an interrelated system of false beliefs and correct beliefs, forming a flawed mental model. A flawed mental model can be said to conflict with a normative model if it is incorrect but coherent, in the sense that the underlying assumptions do not contradict each other, and the model consistently leads to incorrect predictions and explanations and contains elements different from the elements of a correct model. During instruction, when a specific sentence contradicts a false belief through explicit or implicit refutation, the accumulation of multiple belief revisions through refutations can lead eventually to a transformation of a flawed mental model to the correct model for over 60% of the students, either through direct instruction (in the case of the circulatory system) or from exposure to everyday experiences (as perhaps in the case of the earth). There may be other ways to design instruction, such as through holistic confrontation, or direct refutation of the underlying assumptions, that may encourage revision and reduce the likelihood of assimilation or adding to a flawed model, so that successful transformation can be achieved by all students. These ideas are shown in column 2 of Figure 3.1.

Category Mistakes: Categorical Shift from Awareness and Available Knowledge

The preceding sections described two types of conflicting knowledge for which conceptual change can be achieved relatively successfully, mainly because conflicting knowledge, as false beliefs and flawed mental models, is incorrect in being inaccurate. For these two types of conflicting knowledge, the incorrectness is a matter of inaccurate values on some appropriate dimensions or properties. Thus, refutations that contradict the values were successful at achieving conceptual change.

However, we have also mentioned above that there are numerous false beliefs about concepts such as force-and-motion or heat-and-temperature across a variety of domains for which conceptual change cannot be achieved. The robustness of such misconceptions has been demonstrated in literally thousands of studies, about all kinds of science concepts and phenomena, beginning with a book by Novak (1977) and a review by Driver and Easley (1978), both published over three decades ago. By 2008, there were over 8,000 publications describing students’ incorrect ideas and instructional attempts to change them (Confrey, 1990; Driver, Squires, Rushworth, & Wood-Robinson, 1994; Duit, 2008; Ram, Nersessian, & Keil, 1997), indicating that conceptual understanding in the presence of misconceptions remains a challenging problem. We propose the operational definition that certain misconceptions are robust and difficult to change because they have been mistakenly assigned to an inappropriate “lateral” category.

By a “lateral” category, we mean a category that is not hierarchically related to the category to which the concept belongs; instead it is parallel to the category to which the concept belongs. For example, artifacts can be considered a lateral category more or less “parallel” to living beings. Artifacts does not include the sub-categories of living beings, such as animals, reptiles, or robins. Instead, artifacts includes a different set of sub-categories, such as furniture and toys, and furniture includes sub-categories such as tables and chairs (see Figure 3.2). In short, artifacts and living beings can be thought of as occupying different branches of the same hierarchical tree (Thagard, 1990), in this case the Entities tree. We will refer to categories on different branches as “lateral” (vs. “hierarchical”) categories, and when lateral categories occur at about the same level within a tree, we will refer to them as “parallel.”


Figure 3.2   Distinct ontological trees: hierarchical and lateral categories within a tree and between trees

Although artifacts and living beings can both be subsumed under the higher-level category of objects, and therefore share many higher-level dimensions of objects such as “having shape” and “can be thrown,” artifacts and living beings do have distinct and mutually exclusive dimensions as well. For example, living beings have the capacity to “move on their own volition,” to be “responsive,” and to “reproduce,” whereas artifacts cannot.

Lateral categories can sometimes be referred to as ontologically distinct, in that they differ by definition in kind and/or ontology. This means that conceptual change requires a shift across lateral or ontological categories. In order to support the claim that robust misconceptions are miscategorizations across lateral/ontological categories, we have to characterize the nature of misconceptions and the nature of correct information, to see whether they in fact belong to two categories that differ either in kind or in ontology, and thereby are “in conflict.”

The Lateral Categories to which Misconceptions and Correct Scientific Conceptions are Assigned

In order to characterize the nature of robust science misconceptions in terms of the category to which they have been mistakenly assigned, and also to characterize the nature of scientific conceptions in terms of the category to which they should be assigned, we analyzed students’ misconceptions for a variety of science concepts, consolidated researchers’ findings on misconceptions, and examined the history and philosophy of science literature, to induce the properties of both the mistaken category and the correct category. The two broad conflicting categories appear to be Entities (the misconceived view) and Processes (the correct view).

How are Entity-based misconceptions in conflict with scientific conceptions? Our initial conjecture was that scientists view many of these concepts as Processes rather than Entities. Processes can be conceived of as an ontological tree distinct from Entities, verifiable by the predicate test indicating the inappropriateness of some dimensions (see Figure 3.2). For example, heat or the sensation of “hotness” is the speed at which molecules jostle: the higher the speed, the “hotter” the molecules feel. Thus, heat is not “hot molecules” or “hot stuff” (an Entity), but more accurately, the speed of molecules (a Process).
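In standard kinetic-theory terms (a textbook relation we add here for concreteness; the chapter itself does not give a formula), temperature indexes molecular motion rather than a contained substance. For an ideal monatomic gas, the average kinetic energy of a molecule is

\[
\langle E_k \rangle \;=\; \tfrac{1}{2}\, m \,\langle v^2 \rangle \;=\; \tfrac{3}{2}\, k_B T ,
\]

so a “hotter” sample simply contains faster-moving molecules, a fact about a Process, not about an Entity that could be contained or released.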

Entities are objects or substances that have various attributes and behave in various ways (see Figure 3.2, the Entities tree). For example, a ball is a physical object with attributes such as mass, volume, shape, and behaviors such as bouncing and rolling. On the basis of our analyses across four science concepts – force, heat, electricity, and light – we arrived at the commonality that students mistakenly categorize these concepts as Entities (Reiner, Slotta, Chi, & Resnick, 2000). For example, many students view force as a substance kind of Entity that can be possessed, transferred, and dissipated. Students often explain that a moving object slows down because it has “used up all its force” (McCloskey, 1983), as if force were like a fuel that is consumed. Similarly, students think of heat as physical objects such as “hot molecules” or a material substance such as “hot stuff” or “hotness” (Wiser & Amin, 2001), as indicated by phrases such as “molecules of heat” or expressions such as “Close the door, you’re letting all the heat out.” The misconception is that heat can be “contained,” as if it were objects like marbles or substances like sand or water. In either case, heat is misconceived as a kind of Entity.

Misconceiving a concept such as force or heat as a kind of substance or Entity is serious because Entities and Processes essentially share no common dimensions. Entities have dimensions such as “can be contained,” “can have color,” and “can have volume,” while Processes have dimensions such as “occurring over time.” Thus, no Process, whether it’s an event such as a baseball game, a procedure such as baking a cake, or a state change such as melting, can have the dimensions of “having volume,” “having color,” or “can be contained,” whereas no Entity, such as a cake or a ball, can have the dimension of “having certain duration,” such as lasting two hours. (Of course, while Entities don’t occur through time, the Process of living for living beings can have duration.) Thus, each tree might be considered an “ontology” (and its name will be capitalized), in that the trees have mutually exclusive dimensions. This is the definition of ontology used in this framework. Generally, philosophers use the term “ontology” to refer to a system of taxonomic categories for certain existences in the world (Sommers, 1971). However, in this chapter, we will refer to categories that occupy different trees as differing “ontologically” (Chi, 1997, 2005), and to categories that occupy parallel branches within a tree as differing “laterally” or in “kind” (Gelman, 1988; Schwartz, 1977). Unlike categories on different trees, parallel categories within a tree do share overlapping dimensions (for example, the parallel categories artifacts and living beings share the dimensions of objects, such as “having shape” and “can be thrown” – see Figure 3.2 again).
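A toy encoding of this predicate test follows; the dimension lists and concept names are our own illustrative assumptions, and real trees would license many more dimensions:

```python
# Toy encoding of the predicate (dimension) test. Each ontological tree
# licenses its own dimensions; the lists here are illustrative, not exhaustive.

ENTITY_DIMENSIONS = {"can-be-contained", "has-color", "has-volume"}
PROCESS_DIMENSIONS = {"occurs-over-time", "has-duration"}

TREE_OF = {
    "ball": "Entity",
    "sand": "Entity",
    "baseball-game": "Process",
    "heat": "Process",   # heat as correctly categorized by scientists
}

def predicate_sensible(concept, dimension):
    """A predicate is sensible only if its dimension is licensed by the
    tree on which the concept sits."""
    licensed = (ENTITY_DIMENSIONS if TREE_OF[concept] == "Entity"
                else PROCESS_DIMENSIONS)
    return dimension in licensed

print(predicate_sensible("baseball-game", "has-duration"))  # True
print(predicate_sensible("heat", "can-be-contained"))       # False: "letting
# the heat out" applies an Entity dimension to a Process
```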

We claim that this is why some misconceptions are so robust – because the naïve conceptions are miscategorized into an ontologically distinct tree. Such Entity-based misconceptions not only occur for a variety of concepts across a variety of disciplines, but they are held across grade levels, from elementary to college students (Chi et al., 1994), as well as across historical periods (Chi, 1992). They may even account for barriers that were only overcome by scientific discoveries (Chi & Hausmann, 2003). In short, robust misconceptions of the ontologically miscategorized kind are extremely resistant to change, so that everyday experiences encountered during developmental maturation and formal schooling seem powerless to change them, even when students are confronted with their misconception. (This is in contrast to the greater success with which flawed mental models can be transformed from everyday experiences or formal schooling, as described above.)

Telling Students to Shift Categories

How can instruction facilitate shifts across lateral or ontological categories? If misconceptions occur as the result of category mistakes, then instruction needs to focus at the categorical level. When students’ misconceived ideas conflict with correct ideas at the lateral category level, then refutation at the belief level will not promote conceptual change. This is because refutation at the belief level can only cause local revisions of the features/attributes/values of certain dimensions, whereas conceptual change of category mistakes requires changing the dimensions, which may require a categorical shift. Consider the misconception that “coldness from the ice flows into the water, making the water colder.” Essentially, this misconception assumes that ice contains some “cold substance” like tiny cold molecules (the reverse of hot objects, which are often misconceived as containing “hot molecules”), and that this “cold substance” can flow into the surrounding water, which then makes the water colder. We cannot treat this misconception as a false belief and refute it by pointing out that ice does not contain a cold substance, that coldness does not flow, or that water does not get colder because it gains coldness. Refutation only works when a false belief and the correct conception contradict each other on the same dimension. So how can a misconception like “ice contains cold substances” be changed, then? Should a student expect ice to contain an alternative kind of substance if not a “cold” substance? According to our theoretical framework, the change that a student must make has to do with refuting the dimension of “being containable,” not changing the feature of “coldness” or any other kind of sensation or substance. To change the dimension “containable” means that students have to be confronted at the ontological/categorical level, since “containable” is a dimension of Entities, and not a dimension of Processes. Thus, we propose that, in order to achieve radical conceptual change, we need students to make a category shift by reassigning a concept to an alternative lateral category so that a concept can inherit the dimensions of this alternative category. To achieve such reassignment, we need to confront students at the categorical level.

Conceptually, the idea of shifting across or reassigning a concept from one lateral/ontological category to another seems, in principle, to be straightforward and easily achievable, if students were told to shift. Let’s consider the example of a whale. Suppose a young child sees a whale in the ocean and believes it to be a kind of fish, since whales possess many perceptual features of a fish, such as looking like sharks and swimming in water. Based on that mistaken categorization, the child will likely assume that whales, like other fish, breathe through gills by diffusion (a conceptual attribute). To promote conceptual change, we can just tell the child that a whale is a mammal (essentially telling the child to re-categorize or reassign whale to the correct category mammals), perhaps along with providing justification, such as pointing out that whales do not breathe through gills, but through a blowhole. The fact that most children eventually learn that whales are mammals suggests that lateral categorical shifts can occur readily for some misconceptions. This case of reassigning category by telling is shown in the third column of Figure 3.1.
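The ease of this kind of shift can be pictured with a simple programming analogy of ours (not the chapter’s formalism): the concept points at a category, and “telling” re-points it, so the concept immediately inherits the attributes of the new category. The class names and attributes below are illustrative assumptions:

```python
# Analogy only: a category mistake as pointing a concept at the wrong
# category, so that it inherits the wrong attributes; "telling" re-points it.

class Fish:
    breathes_via = "gills"
    reproduces_by = "laying eggs"

class Mammal:
    breathes_via = "lungs (a blowhole, in whales)"
    reproduces_by = "live birth"

whale_category = Fish                # the child's mistaken assignment
print(whale_category.breathes_via)   # 'gills' -- inherited from the wrong category

whale_category = Mammal              # the categorical shift, achieved by telling
print(whale_category.breathes_via)   # 'lungs (a blowhole, in whales)'
```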

But why is categorical shift not easily achieved for robust misconceptions for Processes such as heat and force? A closer examination of the relative ease of categorical shift for the whale example suggests that two conditions are needed in order to overcome barriers to conceptual change for robust misconceptions. First, students have to be made aware that they have made a category mistake, which requires that their ideas be confronted at the categorical level; and second, students must be knowledgeable about the correct category to which a concept actually belongs. If these two conditions are met, then conceptual change can be made with success even if it requires categorical shifts. We briefly discuss these two conditions below.

Lack of Awareness

We propose that part of the difficulty of shifting categories for many science concepts has to do with a lack of awareness, in that students do not realize that they have to shift their assignment of a concept to a different category. This is because reassigning a phenomenon or concept from one kind to another kind is rare in everyday life. That is, students do not routinely need to re-categorize, such as shifting a whale from fish to mammal, since, in our everyday environment, our initial categorizations are mostly correct. Occasionally, we might over-generalize and categorize at a higher superordinate category, but over-generalization is not incorrect and does not require conceptual change. For example, when we identify a furry object with a wagging tail that responds to our commands as a live dog (thus a living being), we are almost never wrong, in the sense that we would rarely mistakenly identify it as a stuffed dog (thus an artifact). The fact that category mistakes rarely occur in real life makes it difficult for learners to recognize that the source of their misunderstanding of new concepts originates from a category mistake. As with metaphors, the rarity of category mistakes is a ploy that is sometimes exploited in stories and films to produce interest, drama, and suspense, as in the children’s novel The Velveteen Rabbit. Moreover, if people do make category mistakes, especially across ontological trees, such as confusing reality (either Entities or Processes) with imagination (Mental States), it is considered bizarre and perhaps a sign of psychological illness.

The rarity of category mistakes in real life is also consistent with findings showing the strength of commitment to the original category to which a concept is assigned, as well as to the boundary between lateral categories (Carey, 1985; Chi, 1988). The commitment to a particular category occurs even as early as age five. For example, once a concept is categorized, young children are extremely reluctant to change the category to which it is assigned. Keil’s work (1989) has shown that, no matter what physical alterations are made to an object (e.g., a live dog), such as shaving off its fur or replacing its tail, five-year-olds will not accept such changes as capable of transforming a live dog to a toy dog (thus crossing the boundary between lateral categories living beings and artifacts). However, they will agree that, with appropriate alterations such as replacing black fur with brown fur, one can transform a skunk into a raccoon. This is because skunks and raccoons belong to the same mammal category. Thus, once assigned, even five-year-olds honor the boundary between kinds and remain committed to the category to which they have assigned a concept.

In short, shifting across lateral categories per se is not a difficult learning mechanism from a computational perspective and from everyday evidence, as illustrated by the whale example above and by the ease with which people can understand metaphors. Metaphors often invoke a predicate or dimension from one category and a concept from a lateral category, often from different ontological trees. For instance, anger (an emotional Mental State) is often treated as a substance (an Entity) that can be contained, as in “He let out his anger” or “I can barely contain my rage” (Lakoff, 1987). Thus, once students are made aware that they have committed category mistakes, shifting across categories can be undertaken readily when students are told or instructed to do so, as in the whale example, or when adults intentionally use metaphors by borrowing properties and values from a dimension of a lateral category.

Knowledge of Alternative Category Available

However, we propose that category mistakes are readily changed primarily when the alternative category is available to the learner who is shifting. Thus, this is the second condition that must be met in order for such category shifts to occur readily when instruction merely tells the students to shift. This type of misconception and ways of changing it are shown in the third column of Figure 3.1. When the alternative category is not available, then misconceptions are tenacious, as explained below.

Missing Schemas: Creating the Missing Alternative Schema

In the preceding section, we proposed that category mistakes, those misconceptions that have been incorrectly assigned to a lateral category, can be changed when students are made aware of the need to shift, and if they know about the alternative category. This section explains why some misconceptions are so tenaciously robust and resistant to change: not only because students lack awareness of the need to change but, most importantly, because they have no knowledge of the alternative category to which a concept belongs. Because we will be addressing more complicated concepts of processes, we will refer to the alternative category as a schema. We begin with an example of a failure to transform a flawed mental model successfully, illustrating succinctly what tenacious misconceptions mean, and how they are persistent and resistant to change.

Tenacious Misconception: An Example

Law and Ogborn (1988) carried out a study in which students were asked to use Prolog to design and build a computational model of their own understanding of force and motion. The Prolog programming required students to express their ideas in propositional rule-based statements, which we can consider to be analogous to beliefs. Building and running such a model forced students to externalize and formalize their ideas, making them explicit, explorable, and capable of offering explanations. Students assessed their models by running their programs, then made modifications based on program results or feedback from their instructor. Since programs could be run, allowing students to make predictions and observe outcomes, we can consider such a program to be analogous to an externalized mental model.

As with our circulatory system data, only some students had clear structural frameworks based on a core set of hypotheses about various aspects of motion that the researchers could identify. We can consider these students as having flawed mental models in that their underlying hypotheses are coherent and consistent. Other students had no clear conceptualization, and these students can be deemed to have sparse and incomplete models. For students with flawed but coherent mental models, the question is, can they change their flawed mental model? One way to determine whether they change their mental models is to see whether they change their implicit core hypotheses. One student’s set of core hypotheses about force-and-motion is shown below. These hypotheses (for example, hypothesis (b), that force is an entity) can be inferred from her rules (described below), and are compatible with various other analyses of students’ misconceptions about force and motion in the literature (e.g., Reiner et al., 2000):

  a. Force is the deciding factor in determining all aspects of motion;
  b. Force is an entity that can be possessed, transferred, and dissipated;
  c. All motions need causes;
  d. Agents cause and control motion by acting as sources that supply force;
  e. Sources that supply force can be internal or external, and the supplied force is referred to as an internal or external force;
  f. Weight is an intrinsic property of an object (even though gravity is conceptualized as an external factor that pulls harder on heavier objects).

The advantage of the Prolog programming environment is that it allowed students to explore the consequences of their externalized beliefs or rules. For example, one student who held core hypothesis (d), that there is a source that supplies the force for every motion, wrote the following Prolog rules for determining the cause of motion:

  1. _object motion-caused-by itself if _object force-supplied-by _object
  2. _object motion-caused-by machine if _object force-supplied-by machine
  3. _object1 motion-caused-by _object2 if _object1 force-supplied-by _object2
  4. _object motion-caused-by gravity if not (_object under-the-influence-of other-external-force).

She then tested her program for the cause of a falling apple, expecting the computer to say that the motion was caused by gravity (her fourth rule). It did not, because in one of her earlier sessions she had included weight as an external supply of force, along with other forces such as friction and air current, so the condition of her fourth rule was not met. The program’s outcome can be thought of as providing explicit refutation or confrontation of her fourth rule.
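To see concretely why the program answered as it did, here is a loose Python transcription of her four rules. This is our reconstruction from the rules above, and the fact list is hypothetical; the original program was in Prolog:

```python
# Loose Python transcription (our reconstruction) of the student's rules.
# force_supplied_by records which agents she listed as supplying force
# to an object; the rules are tried in order, as in her Prolog program.

force_supplied_by = {
    # In an earlier session she listed weight (and air current) among the
    # external suppliers of force acting on a falling apple:
    "apple": ["weight", "air-current"],
}

def motion_caused_by(obj):
    suppliers = force_supplied_by.get(obj, [])
    if obj in suppliers:            # rule 1: the object supplies its own force
        return obj
    if "machine" in suppliers:      # rule 2: a machine supplies the force
        return "machine"
    if suppliers:                   # rule 3: some other object supplies it
        return suppliers[0]
    return "gravity"                # rule 4: no external force, so gravity

print(motion_caused_by("apple"))    # 'weight' -- not the 'gravity' she expected
```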

When she did not get the result she expected, she modified her fourth rule by excluding gravity as an external force. After this patching, the computer still did not give her the expected answer of gravity as the cause of the apple’s fall, because anything placed in air would be affected by air current, which she had also listed as an external force. She then revised her fourth rule again to read: _object motion-caused-by gravity if not (_object motion-caused-by _something). Her problems continued even after various patchings of her other rules.

This example illustrates clearly the point that, despite numerous revisions of this student’s rules in response to refutations from the outcome of the Prolog program, the revisions and the accumulation of multiple revisions to her rules did not transform her flawed mental model into a correct model, because the underlying core hypotheses of her program were not changed. That is, she still assumed that all motions need causes (hypothesis c), that agents cause and control motion by acting as sources that supply force (hypothesis d), and so forth. What she did change was the value or attribute on the same dimensions, such as changing the agent that was responsible for supplying force. Thus, even though the rules are at the same grain size as a statement of false belief, and the set of rules is comparable to the grain size of a flawed mental model, clearly these misconceptions cannot be considered false beliefs and flawed mental models, because their incorrectness is not on the same dimensions, and they cannot be changed by using a refutation method.

As this example also illustrates, the student was not resistant to change per se, since she readily revised her rules, but the multiple belief revisions she undertook did not add up to a correct model transformation since the revisions did not change her underlying core hypotheses themselves, but only the values of the hypothesized dimensions. There are occasions, of course, when students themselves resist making changes by dismissing the feedback or explaining it away. The point here is that, even with the best of intentions and willingness to change, this student could not transform her misconceived view.

In short, there are many concepts like force and motion, for which one’s initial flawed mental model is not transformed to the correct model despite repeated corrections or patchings of the underlying rules, because it is the dimensions of the flawed model themselves that need to be changed. Even though the student willingly modified individual rules (corresponding to false beliefs) as a result of external feedback (or explicit refutation from the program’s outcomes), the revised rules did not transform the flawed mental model into the correct model, because the implicit underlying core hypotheses were still incorrect from a dimension perspective. Thus, the flawed model was resistant to change. What should we conclude? This suggests that some misconceptions are extremely tenacious only because their refutations occurred at the value level, and not at the dimension level.

Conflict between a Misconceived Schema and a Missing Schema

Findings of tenacious misconceptions, similar to those in the Law and Ogborn (1988) study, have been documented for several decades, in that the misconceptions not only are “in conflict” with the correct scientific conceptions but, moreover, are almost never revised, so conceptual change is not achieved. Although we were able to explain many robust misconceptions as category mistakes involving the ontological trees Entities and Processes (Chi, 1997), our explanation for the tenaciousness of many misconceptions was incomplete. Regardless of whether or not students conceive of heat as an Entity, most students nevertheless do recognize that heat transfer is a Process, because they have experienced the apparent movement of “hotness” from one location to another, for example from a warm cup to cold hands. Thus, characterizing heat misconceptions merely as Entity-based does not adequately explain why students have difficulty understanding heat transfer, even though they know heat transfer is a Process.

To explain the latter kind of misconceptions, we had to propose conflicts between two additional kinds of lateral categories within the Process tree, which we have called sequential and emergent (Chi, 2005). Our claim is that students misconceive of some processes as the sequential kind when in fact they are the emergent kind. Sequential processes require a direct kind of causal explanation, whereas emergent processes require an emergent kind of causal explanation.

Briefly, the most explicit distinction between a sequential kind of process and an emergent kind is that a sequential process usually has an identifiable agent that causes some outcome or displayed pattern in a more direct (or indirect) way (indirect means mediated by an intermediate agent or event), whereas an emergent kind of process has no identifiable agent that directly (or indirectly) causes the displayed pattern. We will describe an everyday example, a less familiar example, and a scientific example, for each kind of process, highlighting with each example properties of emergent and sequential processes, as listed in Table 3.1 and Table 3.2. The properties in Table 3.1 and Table 3.2 are different in the following way. Table 3.1 lists the attributes characterizing the interlevel causal explanations of the relationships between the behavior/interactions of the agents and the pattern displayed at the macro level. Table 3.2 lists the “second-order interaction features” characterizing some agents’ interactions relative to other agents’ interactions. More detailed descriptions can be found in Chi, Roscoe, Slotta, Roy, and Chase (2012).

Table 3.1   Five attributes characterizing the inter-level causal explanations relating the agents’ interactions (at the micro level) and the pattern (at the macro level) for emergent and sequential processes

Emergent causal explanations for emergent processes:

  1. The entire collection or all the agents together “cause” the observable global pattern
  2. All agents have equal status with respect to the pattern
  3. Local events and the global pattern can behave in disjointed, non-matching ways
  4. Agents interact to intentionally achieve local goals, ignorant of the global pattern
  5. Mechanism producing the global pattern: proportional change (collective summing across time)

Direct causal explanations for sequential processes:

  1. A single agent or a sub-group of agents can “cause” the global observable pattern
  2. One or more agents have special status with respect to the pattern
  3. Local events and the global pattern behave in a corresponding, matched way
  4. Some agents interact to intentionally achieve the global goal and direct their interactions at producing the global pattern
  5. Mechanism producing the global pattern: incremental change (additive summing across time)

Table 3.2   Five “second-order interaction features” characterizing the relationships between some agents’ interactions relative to other agents’ interactions

Interactions among agents in emergent processes:

  1. All agents behave in more or less the same uniform way
  2. All agents interact randomly with other agents
  3. All agents interact simultaneously
  4. All agents interact independently of one another
  5. Interactions among agents are continuous

Interactions among agents in sequential processes:

  1. Agents behave in distinct ways
  2. Agents can interact with predetermined or restricted others
  3. Agents interact sequentially
  4. Agents’ interactions depend on other agents’ interactions
  5. Agents’ interactions terminate when the pattern-level behavior stops

Sequential example 1. In the familiar process of a baseball game, the final outcome might be explained as being due to the excellent work of the pitcher, thus attributing the outcome directly to a single agent (Sequential attribute #1, Table 3.1) and elevating this single agent to special status (Sequential attribute #2). Moreover, the behavior of local events within the game corresponds to, or aligns with, the global outcome. For example, a team with many home runs in a game is more likely to win; thus, more home runs align with higher scores (Sequential attribute #3).

Sequential example 2. A slightly less familiar example is seeing multiple airplanes flying in a V-formation. This V-pattern is intentional, created by the lead pilot telling the other pilots where to fly in order to achieve the global goal (Sequential attribute #4).

Sequential example 3. A sequential process from biology is cell division, which proceeds through a sequence of three stages. The first, interphase, is a period of cell growth. It is followed by mitosis, the division of the cell nucleus, and then by cytokinesis, the division of the cytoplasm of the parent cell into two daughter cells. In each stage, the cell behaves in a distinct way, either growing or dividing (Sequential feature #1, Table 3.2). Such a process has a definite sequence, in which some events cannot occur until others are completed (Sequential features #3 and #4, Table 3.2).

In contrast, emergent processes have neither an identifiable causal agent or agents nor an identifiable sequence of stages. Rather, the outcome results from the collective and simultaneous interactions of all agents. Let’s consider three examples here as well.

Emergent example 1. The process of a crowd forming a bottleneck, as when the school bell rings and students hurry to get through the narrow classroom door, is an everyday example of an emergent process. Although there is an external trigger (the school bell), the global outcome of forming a bottleneck cannot be attributed to any single agent or group of agents, and the process is not sequential. Instead, all the students (Emergent attribute #1, Table 3.1) simultaneously (Emergent feature #3) rush toward the door at about the same speed (Emergent feature #1), shoving and bumping randomly into whichever student happens to be in the way (Emergent feature #2). See Table 3.2.

Emergent example 2. A slightly less familiar example is migrating geese flying in a V-formation. In contrast to the airplane example, the V-pattern is not caused by a leader goose telling the other geese where to fly. Instead, all the geese are doing the same thing, flying slightly behind another goose because they instinctively seek the area of least air resistance. Thus, they are pursuing the local goal of flying with minimal effort (Emergent attribute #4), ignorant of the pattern they form. When all the geese do the same thing at the same time, collectively, a V-pattern emerges (Emergent attributes #1, #2, and Emergent features #1 and #3).

Emergent example 3. An emergent process from biology is the diffusion of oxygen from the lungs to the blood vessels. This process is caused by all the oxygen and carbon dioxide molecules moving and colliding randomly with and independently of each other (Emergent features #1, #2, #3, #4). From such random collisions, a greater number of oxygen molecules are likely to move from the lungs to the blood than from the blood to the lungs, simply because there are a greater number of them in the lungs than in the blood. The reverse is true for carbon dioxide molecules. Since all molecules interact by colliding randomly, both kinds of molecules move in both directions, so that some oxygen molecules do move from the blood to the lungs, and some carbon dioxide molecules do move from the lungs to the blood. Thus, the local movements of individual molecules may not match the direction of the movement of the majority of the molecules (Emergent attribute #3). Nevertheless, despite local variations, the majority of oxygen molecules end up moving from the lungs to the blood, and the majority of carbon dioxide molecules end up moving in the opposite direction, without any specific intention to move in that global direction (Emergent attribute #4).
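The chapter contains no code, but the diffusion account above maps naturally onto a small stochastic simulation. The sketch below is a minimal two-compartment model of our own construction; the names lungs, blood, and P_CROSS and all the numbers are illustrative assumptions, not taken from the chapter. It shows how a net lungs-to-blood oxygen flow emerges even though every molecule follows the same unbiased random rule, with some molecules moving against the global direction at every step.

```python
import random

# Minimal sketch of the diffusion example: every oxygen molecule behaves
# identically and crosses the membrane at random, with the same probability
# in both directions. The net lungs-to-blood flow emerges only because more
# molecules start out in the lungs. (All names and numbers are illustrative
# assumptions, not taken from the chapter.)

random.seed(0)
lungs, blood = 1000, 100   # assumed initial oxygen counts per compartment
P_CROSS = 0.1              # assumed chance that a molecule crosses per step

for step in range(200):
    to_blood = sum(random.random() < P_CROSS for _ in range(lungs))
    to_lungs = sum(random.random() < P_CROSS for _ in range(blood))
    # Some molecules move "against" the global pattern (blood -> lungs),
    # yet the majority flow follows the concentration difference.
    lungs += to_lungs - to_blood
    blood += to_blood - to_lungs

print(lungs, blood)  # the two counts converge toward equality (~550 each)
```

No agent in this loop intends the global outcome or even “knows” about it; the pattern is produced by collective summing over uniform, independent, random interactions, which is exactly what Emergent attributes #1 and #4 and Emergent features #1 through #4 describe.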

The Source of Tenacious Misconceptions

We said above that one approach to change at the lateral categorical level is to tell students directly to shift categories. However, direct telling would not work for a shift between the sequential process category and the emergent process category, because we assume that students have no knowledge of the emergent category or an emergent schema. If students have no knowledge of an emergent category, how can instruction facilitate conceptual change? Two major steps are required: first, students must learn to differentiate the two kinds of processes, and second, students must build knowledge of an emergent schema. We elaborate these instructional challenges below.

Differentiating the Two Kinds of Processes

The preceding examples illustrate that many phenomena in science look and act as if they belong to one category rather than another. For example, heat flowing into a cool room feels like water flowing down a stream. However, the causal explanations for these similar (heat and water) patterns are distinctly different. Learners can thus be misled by perceptual similarities at the pattern level into treating such pairs of phenomena as having the same causal explanation, resulting in the miscategorization of one but not the other. Therefore, students must be made aware of their miscategorization, must learn to discriminate between the two kinds of phenomena, and must learn to generate a correct causal explanation for the behavior at the pattern level. In short, the lack of awareness of the need to shift categories laterally is due to the low frequency of such shifts in the real world and to superficial pattern-level similarities among many phenomena. As in the case of other category mistakes, instruction aimed at promoting such shifts must begin by making students aware that they have committed a category mistake. It must then help students look past the superficial perceptual similarities at the pattern level that lead them to misconceive two different kinds of processes, each requiring its own kind of causal explanation, as the same kind.

But how can instruction facilitate a discrimination of two different kinds of processes? An obvious answer might be to look at the agent level, and see how the interactions among the agents are different for the two processes. But can we discriminate sequential from emergent processes just by examining the way the agents interact? For example, with close scrutiny, the interactions of the molecules in the process of heat transfer do look slightly different from the interactions of the water molecules in the process of water flowing downstream. Water flowing downstream is a sequential process, caused by the water molecules in one area of the stream being pushed by molecules in the area above it, so that the molecules that are being pushed move downstream a little, and then push the molecules next to them to an even lower area, and so on. In contrast, the sensation of hotness moving from one area to another area (heat flowing) is not a sequential process in that the sensation of hotness moving is not caused by hot molecules moving from one location to another. Rather, heat flowing or transfer is caused by the collisions of faster jostling “hotter” molecules into slower-moving molecules. That is, when faster-moving molecules collide with slower-moving molecules, the collisions cause the faster-moving molecules to slow down (thus decreasing their hotness) and the slower-moving molecules to move faster (thus increasing their hotness). This is how hotness is transferred. Thus, heat transfer is an emergent process. See Figure 3.2 again.
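This collision account of heat transfer can also be caricatured in a few lines of code. The sketch below is our construction, not the chapter’s; averaging two colliding molecules’ speeds is a deliberate simplification of real collision physics, and all the numbers are assumptions. It keeps every molecule fixed in place and only lets randomly chosen adjacent molecules even out their speeds, yet the “hot” region still spreads.

```python
import random

# Sketch of the heat-transfer mechanism described above: molecules stay in
# place, but a "collision" between a faster and a slower neighbor evens out
# their speeds, so hotness moves even though no molecule does. Averaging
# speeds is a deliberate simplification; all values are assumptions.

random.seed(1)
speeds = [10.0] * 20 + [1.0] * 80   # hot region on the left, cold on the right

for _ in range(5000):
    i = random.randrange(len(speeds) - 1)   # pick a random adjacent pair
    speeds[i] = speeds[i + 1] = (speeds[i] + speeds[i + 1]) / 2

# The initially cold right half has "heated up" (its average speed rises
# toward the overall mean of 2.8) although no molecule changed position.
print(sum(speeds[:50]) / 50, sum(speeds[50:]) / 50)
```

Contrast this with water flowing downstream, where the agents themselves (the water molecules) are displaced; here only the speed pattern propagates.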

Thus, heat transfer and water flowing do have different interaction mechanisms at the agent level. Unfortunately, differences in the interactions at the agent level do not, by themselves, distinguish between sequential and emergent processes, because the interactions of different emergent processes can also differ among themselves (and the same is true for sequential processes). For example, the interactions of molecules in a diffusion process are random collisions, whereas the interactions of birds and moths in the process of natural selection, in which moths became darker over time in industrialized England, consist of birds eating moths. The two sets of interactions are quite different, even though both processes (diffusion and natural selection) are emergent. Looking at the mechanism of the interactions per se therefore cannot help students discriminate between emergent and sequential processes.

One way to help students discriminate between sequential and emergent processes, even when they look similar at the perceptual pattern level, is to point out second-order relational differences. Table 3.2 lists “second-order interaction features” characterizing some agents’ interactions relative to other agents’ interactions. By second-order, we mean relational differences: comparing the nature of one interaction with that of another. Feature #1 (in Table 3.2), for example, refers to the point that the interactions of two agents in a sequential process are different (or distinct) from the interactions of two other agents in the same process, whereas the interactions between two agents in an emergent process are the same (uniform) as the interactions of two other agents in the same process. Thus, even though the interacting mechanism of birds eating moths in natural selection differs from the interacting mechanism of molecules colliding with each other in diffusion, the two processes share the same second-order feature of uniformity: all molecules interact in the same way, by colliding with each other, and likewise all birds and moths interact in the same way, by eating or being eaten. Both processes can therefore be categorized as emergent. In the sequential baseball example mentioned above, by contrast, the interaction between some agents (say, the pitcher and the batter) is obviously different from the interaction between the pitcher and the catcher who stands behind the batter, so the interactions among the agents in a sequential process are not uniform. In short, by examining second-order interaction features, one can discriminate a sequential process from an emergent process.
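To make the notion of a second-order feature concrete, here is a toy check of our own (the function name and interaction labels are made-up illustrations) that ignores what each interaction mechanism is and asks only whether all pairwise interactions in a process are of the same type, i.e., the uniformity feature (#1 in Table 3.2).

```python
def interactions_uniform(interaction_types):
    """Second-order check: are all pairwise interactions of the same type?

    The argument is a list of labels, one per agent-agent interaction;
    the labels themselves are illustrative assumptions.
    """
    return len(set(interaction_types)) == 1

# Diffusion: every interaction is a random collision -> uniform (emergent-like).
print(interactions_uniform(["collide", "collide", "collide"]))   # True

# Baseball: pitcher-batter, pitcher-catcher, and batter-fielder interactions
# differ from one another -> not uniform (sequential-like).
print(interactions_uniform(["pitch", "catch", "field"]))         # False
```

Note that the check compares interactions with each other rather than inspecting any single mechanism, which is precisely why it groups diffusion with natural selection (all “collide” versus all “eat”) despite their very different first-order mechanisms.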

Creating the Missing Schema

In contrast to the whale example, in which it seemed relatively easy for children to shift categories simply by being told that whales are mammals, would science students find it easy to shift categories if we simply told them that heat transfer is an emergent rather than a sequential process? The answer is obviously no, because students are ignorant of ideas about emergence. Thus, we assume that the second challenge in changing tenacious misconceptions of the emergent kind is that an emergent process category is neither familiar nor available to students, so they cannot shift to it and use it to assimilate novel concepts. This missing-schema situation is tractable, and it suggests an instructional approach of building such a schema. In the case of tenacious misconceptions, then, instruction to promote a categorical shift must also help students first build a schema about emergence. The term “schema” is more appropriate than the term “category” for describing knowledge of emergent processes because a schema is more encompassing, including ways of generating causal explanations for understanding emergent processes. Our prediction is that, to achieve successful conceptual change for tenaciously misconceived concepts and phenomena, we need first to teach students the properties of such an emergent schema, which is distinct from the direct schema for sequential processes with which they are familiar and to which they have mistakenly assigned concepts. Once students have successfully built such an alternative schema, with its distinct set of properties (as shown in Tables 3.1 and 3.2), they can begin to assimilate new instruction (for example, about heat transfer) into the new category. Preliminary successes with this instructional method have been reported in Slotta and Chi (2006) and Chi et al. (2012). This intervention method is shown in the last column of Figure 3.1.

Summary

This chapter addresses the problem of learning when prior knowledge conflicts with the to-be-learned information. This kind of learning is of the conceptual change kind rather than the enrichment kind. We propose that prior knowledge can conflict with to-be-learned information in two basic ways: prior knowledge can be incorrect in contradicting correct information on the same dimensions, or prior knowledge can be incorrect in the dimensions themselves. In the former case, conceptual change can be achieved by refutation (implicit or explicit), either at the belief level or at the mental model level, and at both levels conceptual change can be achieved successfully. The success of such refutations for false beliefs and flawed mental models hinges on the assumption that the misconception and the correct conception are assigned to the same category or hierarchy of categories, so that they share the same dimensions as defined by their categorical membership; the incorrect prior knowledge therefore conflicts in an inaccurate sense. In the latter case, in which incorrect prior knowledge conflicts with correct knowledge in an incommensurate sense, the source of misconceptions is a mis-assignment between categories on lateral branches of ontological trees, and conceptual change requires a categorical shift. Such a shift necessitates that the learner be aware that the shift is needed and that the correct category be available. For many tenacious misconceptions in science, the lateral category or schema to which misconceptions have to be reassigned, emergent processes, does not exist in students’ knowledge base, so instruction has to build a new schema. Because emergent and sequential processes are different in kind, with mutually exclusive properties, instruction needs to confront and reject the mis-assigned direct schema for interpreting emergent processes and to build the alternative emergent schema, perhaps through direct instruction using contrasting cases. Of course, the original direct schema needs to remain, as it is important for understanding processes that really are sequential.

A preliminary attempt at helping students build the missing emergent schema is discussed in Chi et al. (2012). In sum, this chapter has provided a theoretical framework that defines four different ways in which prior misconceived knowledge can conflict with correct knowledge, explained why some types of misconceptions are more robust than others, and prescribed instructional intervention methods for removing misconceptions as a function of their specific type.

Acknowledgments

The author is grateful for funding and support provided by the Spencer Foundation (Grant No. 200100305 and Grant No. 200800196) and comments from Dongchen Xu.

References

Broughton, S. H., Sinatra, G. M., & Reynolds, R. E. (2007). Refutation text effect: Influence on learning and attention. Paper presented at the Annual Meeting of the American Educational Research Association, Chicago, IL.
Carey, S. (1985). Conceptual change in childhood. Cambridge, MA: MIT Press.
Carey, S. (1991). Knowledge acquisition: Enrichment or conceptual change? In S. Carey & R. Gelman (Eds.), The epigenesis of mind (pp. 257–291). Hillsdale, NJ: Lawrence Erlbaum Associates.
Chi, M. T. H. (1988). Children’s lack of access and knowledge reorganization: An example from the concept of animism. In F. Weinert & M. Perlmutter (Eds.), Memory development: Universal changes and individual differences (pp. 169–194). Hillsdale, NJ: Lawrence Erlbaum Associates.
Chi, M. T. H. (1992). Conceptual change within and across ontological categories: Examples from learning and discovery in science. In R. Giere (Ed.), Cognitive models of science: Minnesota studies in the philosophy of science (pp. 129–186). Minneapolis, MN: University of Minnesota Press.
Chi, M. T. H. (1997). Creativity: Shifting across ontological categories flexibly. In T. B. Ward, S. M. Smith, & J. Vaid (Eds.), Conceptual structures and processes: Emergence, discovery and change (pp. 209–234). Washington, DC: American Psychological Association.
Chi, M. T. H. (2000). Cognitive understanding levels. In A. E. Kazdin (Ed.), Encyclopedia of psychology (Vol. 2, pp. 146–151). Washington, DC: American Psychological Association.
Chi, M. T. H. (2005). Common sense conceptions of emergent processes: Why some misconceptions are robust. Journal of the Learning Sciences, 14, 161–199.
Chi, M. T. H., de Leeuw, N., Chiu, M. H., & LaVancher, C. (1994). Eliciting self-explanations improves understanding. Cognitive Science, 18, 439–477.
Chi, M. T. H., & Hausmann, R. G. M. (2003). Do radical discoveries require ontological shifts? In L. V. Shavinina (Ed.), International handbook on innovation (pp. 430–444). Oxford, UK: Pergamon.
Chi, M. T. H., & Roscoe, R. (2002). The processes and challenges of conceptual change. In M. Limon & L. Mason (Eds.), Reconsidering conceptual change: Issues in theory and practice (pp. 3–27). Dordrecht, The Netherlands: Kluwer.
Chi, M. T. H., Roscoe, R., Slotta, J., Roy, M., & Chase, C. C. (2012). Misconceived causal explanations for emergent processes. Cognitive Science, 36, 1–61.
Chi, M. T. H., Slotta, J. D., & de Leeuw, N. (1994). From things to processes: A theory of conceptual change for learning science concepts. Learning and Instruction, 4, 27–43.
Confrey, J. (1990). A review of the research on student conceptions in mathematics, science and programming. In C. B. Cazden (Ed.), Review of research in education. Washington, DC: American Educational Research Association.
de Leeuw, N. (1993). Students’ beliefs about the circulatory system: Are misconceptions universal? In Proceedings of the Fifteenth Annual Conference of the Cognitive Science Society (pp. 389–393). Hillsdale, NJ: Lawrence Erlbaum Associates.
Driver, R., & Easley, J. (1978). Pupils and paradigms: A review of literature related to concept development in adolescent science students. Studies in Science Education, 5, 61–84.
Driver, R., Squires, A., Rushworth, P., & Wood-Robinson, V. (1994). Making sense of secondary science. London, UK: Routledge.
Duit, R. (2008). Bibliography – STCSE: Students’ and teachers’ conceptions and science education. Available at: www.ipn.uni-kiel.de/aktuell/stcse/stcse.html (retrieved June 1, 2009).
Gadgil, S., Nokes, T. J., & Chi, M. T. H. (2011). Effectiveness of holistic mental model confrontation in driving conceptual change. Learning and Instruction, 22, 47–61.
Gelman, S. (1988). The development of induction within natural kind and artifact categories. Cognitive Psychology, 20, 65–95.
Gentner, D., & Stevens, A. L. (Eds.). (1983). Mental models. Hillsdale, NJ: Lawrence Erlbaum Associates.
Guzzetti, B. J., Snyder, T. E., Glass, G. V., & Gamas, W. S. (1993). Promoting conceptual change in science: A comparative meta-analysis of instructional interventions from reading education and science education. Reading Research Quarterly, 28, 116–159.
Keil, F. (1989). Concepts, kinds, and cognitive development. Cambridge, MA: MIT Press.
Lakoff, G. (1987). Women, fire, and dangerous things: What categories reveal about the mind. Chicago, IL: University of Chicago Press.
Law, N., & Ogborn, J. (1988). Students as expert system developers: A means of eliciting and understanding commonsense reasoning. Journal of Research on Computing in Education, 26, 497–514.
McCloskey, M. (1983). Naïve theories of motion. In D. Gentner & A. L. Stevens (Eds.), Mental models (pp. 299–324). Hillsdale, NJ: Lawrence Erlbaum Associates.
Novak, J. D. (1977). A theory of education. Ithaca, NY: Cornell University Press.
Ram, A., Nersessian, N. J., & Keil, F. C. (1997). Special issue: Conceptual change. Journal of the Learning Sciences, 6, 1–91.
Reiner, M., Slotta, J. D., Chi, M. T. H., & Resnick, L. B. (2000). Naïve physics reasoning: A commitment to substance-based conceptions. Cognition and Instruction, 18, 1–34.
Samarapungavan, A., & Wiers, R. W. (1997). Children’s thoughts on the origin of species: A study of explanatory coherence. Cognitive Science, 21, 147–177.
Schwartz, S. P. (1977). Introduction. In S. P. Schwartz (Ed.), Naming, necessity and natural kinds (pp. 13–41). Ithaca, NY: Cornell University Press.
Slotta, J. D., & Chi, M. T. H. (2006). The impact of ontology training on conceptual change: Helping students understand the challenging topics in science. Cognition and Instruction, 24, 261–289.
Sommers, F. (1971). Structural ontology. Philosophia, 1, 21–42.
Thagard, P. (1990). Concepts and conceptual change. Synthese, 82, 255–274.
Vosniadou, S. (2004). Extending the conceptual change approach to mathematics learning and teaching. Learning and Instruction, 14, 445–451.
Vosniadou, S., & Brewer, W. F. (1992). Mental models of the earth: A study of conceptual change in childhood. Cognitive Psychology, 24, 535–585.
Vosniadou, S., & Brewer, W. F. (1994). Mental models of the day/night cycle. Cognitive Science, 18, 123–183.
Wiser, M. (1987). The differentiation of heat and temperature: History of science and novice–expert shift. In S. Strauss (Ed.), Ontogeny, phylogeny, and historical development (pp. 28–48). Norwood, NJ: Ablex.
Wiser, M., & Amin, T. (2001). “Is heat hot?” Inducing conceptual change by integrating everyday and scientific perspectives on thermal phenomena. Learning and Instruction, 11, 331–353.