On the purpose of mathematics education research

Making productive contributions to policy and practice

Authored by: Dylan Wiliam, Frank K. Lester

Handbook of International Research in Mathematics Education

Print publication date: June 2008
Online publication date: April 2010

Print ISBN: 9780805858754
eBook ISBN: 9780203930236
Adobe ISBN: 9781135192761

10.4324/9780203930236.ch3

 


On the purpose of mathematics education research

Why and for whom is research in mathematics education conducted? Is our research, as some cynically insist, simply an activity pursued by “ivory tower” academics intent on publishing articles that are read only by other academics? Or, as others believe, is its purpose to promote the development of robust theories about the teaching and learning of mathematics? Some hold yet another view, namely that research should focus on the pursuit of knowledge that causes real, lasting changes not only in the way people think about learning and teaching, but also in how they act. In this chapter we discuss these and related questions and propose a way to think about mathematics education research that can serve to move us toward making productive contributions to both policy and practice. The first part of the chapter deals with how (and for whom) research in mathematics education has been carried out while the second discusses what counts as evidence in mathematics education research.

How and for Whom Has Research in Mathematics Education Been Done?

In the 1990s, at least three carefully researched, rather comprehensive, English-language compendia were written on the state of the field’s knowledge about mathematics teaching and learning (Bishop, Clements, Keitel, Kilpatrick, & Laborde, 1996; Grouws, 1992; and Sierpinska & Kilpatrick, 1998). Will any of these volumes have any real impact on the practices of teachers? Will any of them have any direct influence on educational policy? We think it unlikely and we believe that the failure of publications such as these to resonate with the interests, needs, and concerns of practitioners is because the research presented in them was concerned primarily with the pursuit of “knowledge” (in the sense of collections of items of generally agreed upon information) and developing theories, rather than focusing on actually moving people—teachers, teacher educators, school administrators, policy makers, etc.—to action.

How has the research been done?

For most of the history of research in mathematics education, the predominant way of learning about and understanding phenomena related to the teaching and learning of mathematics has been based in the tradition of scientific rationalism—we have wanted to emulate the successes of the physical sciences (Lester & Lambdin, 2003); successes that have stemmed largely from the fact that the meanings of experimental results are likely to be agreed upon across a wide range of contexts, and by a large proportion of the research community. In the writing up of such research, the authors have assumed that the text produced has the same meaning for the vast majority of readers, and applies across a wide range of contexts. Only recently have mathematics educators come to realize that these “objective” 1 methods are often not appropriate to address educational research problems (Wiliam, 1998).

For some, this lack of success can be attributed to the fact that educational research has not yet developed into a fully mature science. Thomas Kuhn, in his classic book, The Structure of Scientific Revolutions, described natural philosophy (i.e., the study of nature) before the Renaissance as a “pre-science” (Kuhn, 1962), indicating a period in which there was no agreement about basic principles and ways of working. During the Renaissance, however, there was increasing agreement about methods of inquiry, leading to a period of stability which Kuhn termed “normal science.” Currently, educational research shares many (although not all) of the features of a “pre-science,” but what this means for the future of educational research is not clear. For some, only when educational researchers agree about how one goes about creating knowledge in educational research will education start producing “reliable knowledge” (Ziman, 1978) and become a “proper science.” However, for others, the very nature of the educational activity—the complexity of the objects of study—means that educational research can never become a “science” in the traditional (and narrow) sense.

One of the most salient differences between research in mathematics education and research in the physical sciences is the importance accorded to “context.” In the physical sciences, issues of context rarely arise. This is because differences between situations that are not represented in our theoretical models rarely make a difference. For example, physical scientists know that they need to record the current passing through an ammeter, but don’t need to record the color of the ammeter. In this sense, theorizations in the physical sciences are relatively complete. In contrast, research in mathematics education frequently generates results that hold in some settings but not in others. For example, studies of the effectiveness of feedback to learners have produced mixed results. In some studies, giving feedback was found to be effective in improving learning, while in other studies, there was no clear effect. One way to interpret this is to attribute the differences to the effects of “context” (much in the same way that statisticians describe anything not accounted for in their models as “error”). However, such differences can also be interpreted as pointing to the need for further theory building. When Kluger and DeNisi (1996) found huge differences in the effectiveness of feedback in improving performance, they looked carefully at the kinds of feedback involved in the different studies. They found that feedback that focused on how well the individual was doing (ego-involving feedback) produced very small (and sometimes slightly negative) effects. Feedback that focused on what the individual needed to do to improve (task-involving feedback), on the other hand, produced significant gains. Furthermore, they found that feedback was even more effective when, as well as focusing on what needed improvement, it indicated how to go about such improvement.

Given the complexity of classrooms and other learning environments, we do not believe that we will ever reach a situation in which our theorization of educational settings will approach the completeness of that in the physical sciences. However, we do suggest that progress in research in mathematics education will benefit from regarding the effects of “context” as opportunities for the further development of theory.

A distinct, but related, difficulty in research in mathematics education is that even when we collect the “right” data, phenomena interact with each other in ways that are likely to be impossible to predict. In this context, we note that an increasing number of philosophers of science and others have recognized, through work in chaos and complexity theory, that many physical systems show the kind of unpredictability prevalent in the social sciences. In particular, the sensitivity of educational phenomena to small changes in detail means that it is literally impossible to put the same innovation into practice in the same way in different classrooms. The difficulty of putting into practice the fruits of educational research suggests we need a different way of thinking about educational inquiry.

Typically in the physical sciences, reliable knowledge is produced by searching for patterns that are common across a range of contexts, and by looking at broad trends. For example, the behavior of individual molecules in a gas is impossible to predict, but the aggregate behavior of large numbers of such molecules can be predicted quite well. Similarly, it is impossible to predict which people will die in a given year but the actuarial sciences have developed sophisticated and accurate methods for predicting the numbers of people dying of particular causes.
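To make the contrast concrete, the following is a minimal simulation sketch (ours, not drawn from the chapter or from the actuarial literature; the population size and mortality rate are arbitrary illustrative values). Individual outcomes are modeled as independent chance events, yet the yearly totals are highly predictable.

```python
import random

# Illustrative sketch: each simulated person has a 1% chance of dying in a given year.
# No individual outcome is predictable, but the aggregate count across a large
# population clusters tightly around its expected value.
random.seed(1)
population, annual_risk = 100_000, 0.01

yearly_totals = [
    sum(random.random() < annual_risk for _ in range(population))
    for _ in range(10)
]

print("Expected deaths per year:", int(population * annual_risk))
print("Simulated totals over ten years:", yearly_totals)
# The totals differ from 1,000 by only a few percent, which is why aggregate
# prediction succeeds where individual prediction fails.
```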

In education, however, because of weak theorization, increasing sample sizes to “average out” the effects of context often ends up producing only bland platitudes, which seem only to “tell us what we already knew.” 2

However, the fact that educational research cannot (or at least at the moment does not) produce transcendent truths does not mean that such research cannot be useful, and we suggest that a focus on the usefulness of educational research—and in particular its fertility in suggesting more appropriate courses of action—is a more relevant criterion than the reliability and transcendence of the knowledge produced.

In the 1960s and 1970s, partly as a response to the failure of educational research to have much impact on practice, there was a surge in interest in different ways of finding out about and understanding educational processes, principally derived from the more qualitative approaches that had been developed in sociology and anthropology. In particular, many studies addressed the problem of context by looking in detail at a single educational site (see, for example, Lacey, 1970). In such a study, the problem of context is tackled not by trying to average out across all contexts, but by attending to the details of the particular context. The actual setting for the research is laid out in considerable detail, and readers can make up their own minds about how convincing they find the evidence and the conclusions drawn from it. However, this does not absolve the researcher of responsibility for how the research is conducted. Indeed, it is still incumbent on researchers to conduct their inquiry and to report their findings in such a way that the meanings of such findings will be shared, to a greater or lesser extent, by various readers.

For whom is the research intended?

The Journal for Research in Mathematics Education, an official journal of the National Council of Teachers of Mathematics, is devoted to the interests of teachers of mathematics and mathematics education at all levels—preschool through adult. (inside front cover of every issue of the journal)

It is an unfortunate fact that too few teachers and other education practitioners pay attention to the research that is so carefully and thoughtfully reported in research journals such as the Journal for Research in Mathematics Education (JRME). But this is not a new development. Writing on the occasion of the publication of the journal’s inaugural issue in 1970, then-president of the National Council of Teachers of Mathematics (NCTM), Julius Hlavaty, suggested that the purpose of the new journal was “to give the teacher in the classroom, the administrator and curriculum consultant at the planning level, and even the man in the street [sic], the information, guidance, and help that research can provide” (Hlavaty, 1970, p. 7).

Despite Hlavaty’s vision, opinions about the relevance of the JRME for teachers remained mixed for several years after the establishment of the journal. For example, in the May 1978 issue of the journal, then NCTM president John Egsgard, himself a classroom teacher, insisted:

Until the mathematics-education research community can come up with results that will affect the classroom teacher, be it an elementary school teacher, a junior high teacher, a secondary school teacher, a community college teacher, or a teacher of mathematics education, I do not believe that the Council would be justified in providing additional resources for research. (p. 241)

The sentiment among many practitioners is not so different today from what it was more than 20 years ago. Among the many explanations proposed for the failure of our research to resonate with teachers, one that has not been given adequate attention by mathematics educators is that researchers and teachers have different ways of validating what they know and believe about mathematics teaching and learning. They also accept different ways to frame their discourse about what they know and believe. Many researchers tend to seek validation for knowledge claims by means of formal research that adheres to certain rules of procedure, including such matters as reliability and validity. By contrast, teachers often rely on personal judgments and social (dialogical) discourse to determine “what works for me” (Hargreaves, 1998). Glaser, Abelson, and Garrison (1983), in discussing how research results are put into practice, summarize the distinctions between researchers and teachers as follows:

The differences include: a tendency to live in two different professional communities, or “worlds”; distinctive cognitive styles; responsiveness to divergent rewards; and different beliefs about how knowledge can best contribute to human welfare. (p. 395)

If researchers and teachers live in two different “worlds,” it seems natural that they would also communicate differently about phenomena occurring in those worlds. Indeed, Schwandt (1995, 1996) has suggested that the lack of perceived relevance of most educational research for teachers and other practitioners can be attributed to how members of these communities communicate their ideas. He insists that many researchers communicate their ideas in terms of (monological) scientific rationalism, whereas teachers—and some researchers—tend to communicate their ideas through “the lens of dialogical, communicative rationalism” (Schwandt, 1995, p. 1). In the following sections we elaborate on these two ways to communicate.

(Monological) Scientific Rationalism

According to Schwandt (1995), scientific rationalism is a style of inquiry shaped by six principles:

  1. True knowledge begins in doubt and distrust.
  2. Engaging in this process of methodical doubting is a solitary, monological activity.
  3. Proper knowledge is found by following rules and method (rules permit the systematic extension of knowledge and ensure that nothing will be admitted as knowledge unless it satisfies the requirements of specified rules).
  4. Proper (i.e., scientifically respectable) knowledge requires justification, or proof.
  5. Knowledge is a possession and an individual knower is in an ownership relation to that knowledge.
  6. In justifying claims to knowledge there can be no appeal other than to reason. (pp. 1–2)

Of special concern for scientific rationalists are the nature of the claims that are made and how these claims should be justified. Furthermore, all the ways deemed acceptable for justifying a claim are regarded as uncertain or unreliable in one way or another. Historically, scientific rationalists typically employ four basic types of argument to justify claims: (1) argument by example to arrive at some sort of generalization, (2) argument by analogy (because phenomenon A is like phenomenon B in certain ways, the researcher argues that they are also alike in another specific way of interest), (3) argument from authority (the use of existing literature to support a position or help make a case), and (4) argument from statistical inference. Examples of the use of each of these types of argument are easy to identify in issues of our mathematics education research journals. Adherents of scientific rationalism accept that each of these methods of justification is readily subject to the error of reaching a conclusion with insufficient evidence or to the error of overlooking alternative explanations.

(Dialogical) Communicative Rationalism

As explained by Shotter (1993, p. 166), communicative rationalism opposes scientific rationalism in three fundamental ways. First, rather than regarding the social world as “out there waiting to be discovered,” the communicative rationalist insists that the world can only be studied from a position of involvement within it. 3 Second, “knowledge of [the] world is practical-moral knowledge and does not depend upon justification or proof for its practical efficacy.” Third, “we are not in an ‘ownership’ relation to such knowledge, but we embody it as part of who and what we are.” Thus, communicative rationalism provides a different way to consider what it means to know. “Instead of simple observational claims about objects, knowing other people is offered as a paradigm for knowledge” (Schwandt, 1995, p. 7). When we adopt a communicative rationalistic approach to research, “we come to understand that the apparently orderly, accountable, self-evidently knowable and controllable characteristics of both ourselves and our social forms of life are constructed upon a set of disorderly, contested, conversational forms of interaction” (Schwandt, 1996, p. 14). And these “conversational forms of interaction” help us develop knowledge of our practices and ourselves. Shotter suggests that to Ryle’s (1949) two kinds of knowledge—knowing that and knowing how—we should add a third type: knowing from. This type is characterized as knowledge “one has from within a situation, a group, a social institution, or society” (Shotter, 1993, p. 19).

To accept communicative rationalism involves accepting that reason is dialogical in nature: “It is concerned with the construction and maintenance of conversational reality in terms of which people influence each other not just in their ideas but in their being” (Schwandt, 1995, p. 7).

The implications of communicative rationalism for mathematics education research may not be immediately apparent, but they at least involve how we make and justify claims in our research; how we go about convincing others of the claims we make as a result of our research; and how we defend our claims on ethical and practical grounds. In particular, communicative rationalism attempts to avoid treating students and teachers as objects of thought in order to make claims about them that will guide future deliberative actions. Instead, it aims to include researchers, teachers (and students) in dialogical conversations in order to generate practical knowledge in specific situations. Thus, claims are made only after the various perspectives (or worldviews, background assumptions, and beliefs, etc.) of all those engaged in the dialogue have been openly considered and negotiated. Schwandt and Shotter believe that it is this process of open negotiation of claims (and of what is regarded as evidence) among all participants in the discourse that ultimately moves people to take action to change.

Scientific rationalism therefore differs from communicative rationalism not only in how knowledge is warranted, but also in what is to count as knowledge. Within communicative rationalism, the practical knowledge that teachers possess in the contexts of their classrooms—how to make complex, nuanced judgments in the face of considerable complexity—is to be counted as knowledge just as much as the decontextualized, transcendent, but often difficult-to-apply “truths” of scientific rationalism. From the perspective of scientific rationalism, the failure of teachers to take on board the findings of educational research may be viewed as inexplicable (or at least irrational). From the perspective of communicative rationalism, however, the reasons for the failure of “center-to-periphery” models of dissemination are all too clear: a huge part of the “knowledge”—specifically how to make it work in practice—is missing. This knowledge is missing because, as remarked above, the relatively incomplete theorization in mathematics education research means that the explicit knowledge available from the research literature does not tell teachers what to do. There is a whole range of choices for the teacher to make in a given situation, all of which are consistent with the findings of research, leaving the teacher to choose amongst the alternatives, using their knowledge of the students, the school context, and a range of other variables. The sheer complexity of classroom and school life, and the speed with which decisions often have to be made, means that the knowledge that is brought into play by teachers in making decisions is largely implicit rather than explicit.

The complementary roles of tacit and explicit knowledge are brought out clearly in the model of knowledge-creation in organizations developed by Nonaka and Takeuchi (1995). They begin by observing that while some of the knowledge possessed by individuals in organizations is explicit, much of it is tacit knowledge, and the extent of this tacit knowledge is often unrecognized. Indeed, organizations frequently discover what an individual knows only after that person has left the organization!

The existence of two types of knowledge—explicit and tacit—results in four different modes of knowledge conversion, as shown in Figure 3.1 (the distinction between explicit and tacit is in reality, of course, a continuum, but for reasons of clarity it is presented as a dichotomy in the figure). The process of socialization can be viewed as one of passing on existing tacit knowledge to others, while externalization involves making tacit knowledge explicit. Developing new explicit knowledge from existing explicit knowledge is a process of combination, while internalization consists of making explicit knowledge one’s own (that is, converting it back into tacit knowledge).

Figure 3.1 Four modes of knowledge conversion (after Nonaka & Takeuchi, 1995).

Nonaka and Takeuchi (1995) propose that these four processes typically occur in the following sequence: First, the socialization mode usually starts with building a “field” of interaction. This field facilitates the sharing of members’ experiences and mental models. Second, the externalization mode is triggered by meaningful “dialogue or collective reflection,” in which using appropriate metaphor or analogy helps team members to articulate hidden tacit knowledge that is otherwise hard to communicate. Third, the combination mode is triggered by “networking” newly created knowledge and existing knowledge from other sections of the organization, thereby crystallizing them into a new product, service or managerial system. Finally, “learning by doing” triggers internalization (pp. 70–71).

What this analysis makes clear is that scientific rationalism is concerned only with those situations in which one person’s explicit knowledge is transmitted to others as explicit knowledge (the bottom-right cell of Figure 3.1). Communicative rationalism, on the other hand, involves all the kinds of knowledge-creation shown in Figure 3.1.

Implications for research practice

From the foregoing it should be clear that we believe that scientific rationalism has a place in educational inquiry. However, particularly for the kinds of phenomena studied in educational research, other kinds of knowledge-building processes are also absolutely necessary if educational research is to inform educational practice.

It should also be clear that we are not advocating abandoning concern for careful argument and evidence in favor of some sort of political rhetoric devoid of reason. Instead, we are promoting a renewal of a sense of purpose for our research activity that seems to be disappearing: namely, a concern for making real, positive, lasting changes in what goes on in classrooms. 4 We suggest that such changes will occur only when we become more aware of and concerned with sharing of meanings across researchers and practitioners.

Communicative rationalism, then, is intended actually to move people to action, in addition to giving them good ideas. That is, it aims to cause people to sit up and take notice; to do something as a result of the dialogue in which they have engaged. In order to move others to action, the claims researchers argue for must involve careful attention to what researchers share with their intended audiences and what distinguishes the researchers from them. By so doing, researchers become more familiar with other ways of thinking about their data (i.e., they are able to consider how defensible their claims are in comparison with those of others) and they become better prepared to consider the ethical consequences of their claims.

What Role Does (Should) Evidence Play in Moving People to Action?

The relationship between different approaches to research can be illuminated by using ideas from hermeneutics: the name given to the study of interpretation (named after Hermes, the messenger god of classical Greek mythology). Originally developed in theology for the interpretation of Biblical texts, hermeneutics was applied by Wilhelm Dilthey in the 19th century to philosophy more widely.

It is often assumed that an utterance, picture, piece of writing, etc. (collectively referred to as text) has a single absolute meaning, and that if we only stare at the text long enough, the one true meaning will emerge. However, it is clear that the meaning of a piece of text can vary according to its context, and even in the same context, a piece of text might have different meanings for different readers. For example, if a student’s work is praised by her teacher, the student might interpret this as indicating that the work is a significant achievement of which the student should be proud. However, if the student’s experience of the teacher is that praise is used routinely and without sincerity, then the interpretation of exactly the same words might be quite different (Brophy, 1981). The text (in this case the praise) will be interpreted differently in different contexts, and by different readers (e.g., students). These three key ideas—text, context, and reader—are said to form the hermeneutic circle.

In educational research the text is usually just data. Harding (1987) has suggested that “one could reasonably argue that all evidence-gathering techniques fall into one of the following three categories: listening to (or interrogating) informants, observing behavior, or examining historical traces and records” (p. 2).

Sometimes the fact that the data have to be elicited is obvious, as when we sit down with people and ask them some questions and tape-record the responses. At other times this elicitation process is less obvious. If we are in a classroom observing and making notes on a teacher’s or students’ actions, it does not feel as if we are eliciting evidence. It feels much more like a process in which the evidence presents itself to us. However, the things we choose to make notes about, and even the things that we observe (as opposed to those we see), depend on our personal theories about what is important. In other words, all data are, in some sense, elicited. This is true even in the physical sciences where, as the physicist Werner Heisenberg remarked: “What we learn about is not nature itself, but nature exposed to our methods of questioning” (quoted in Johnson, 1996, p. 147).

For some forms of evidence, the process of elicitation is the same as the process of recording. If we ask a school for copies of its policy documents in a particular area, all the evidence we elicit comes to us in permanent form. However, often much of the evidence that is elicited is ephemeral, and only some of it gets recorded. We might be interviewing someone who is uncomfortable with the idea of speaking into an audio tape recorder, and so we have to rely on note taking. Even if we do audio-tape an interview, this will not record changes in the interviewee’s posture that might suggest a different interpretation of what is being said from that which might be made without the visual evidence. The important point here is that what is taken as evidence is relative to the researcher’s interests and perspectives and necessarily involves interpretation.

Research based on approaches derived from the physical sciences (often called positivistic approaches, named after a school of philosophy of science popular in the second quarter of the last century) emphasizes text at the expense of context and reader. It is assumed that the same educational experiment would yield substantially the same results were it to be repeated elsewhere (e.g., in another school), and that different people reading the results would be in substantial agreement about their meaning. Other approaches give more or less weight to the roles played by context and reader. For example, an ethnography will place much greater weight on the context in which the evidence is generated than would be the case for more positivistic approaches to educational research, but would build in safeguards so that different readers would share, as far as possible, the same interpretations. In contrast, a teacher researching her own classroom might pay relatively little attention to whether the meanings of her findings are applicable elsewhere (so that generalizability across contexts is not a concern) or to whether her interpretations are agreed upon by others (so that generalization across readers is not a concern). For her, the meaning of the evidence solely within her own classroom might well be paramount.

In what sense, then, can the results of research in mathematics education—and particularly those emerging from communicative, rather than scientific, rationalist epistemologies—be regarded as “knowledge”? The traditional definition of knowledge is that it is simply “justified true belief” (Griffiths, 1967). In other words, we can be said to know something if we believe it, if it is true, and if we have a justification for our belief. There are at least two difficulties with applying this definition to research in mathematics education.

The first is that it is now acknowledged that there are severe difficulties in establishing what constitutes a justification or a “warrant” for belief (Kitcher, 1984). The second is that these problems are compounded in the social sciences because the chain of inference might have to be probabilistic, rather than deterministic. In this case, our inference may be justified, but not true. 5

An alternative view of knowledge, based on Goldman’s (1976) proposals for the basis of perceptual knowledge, offers a partial solution to the problem. The central feature of his approach is that knowing something is, in essence, the ability to eliminate other rival possibilities. For example, if a person (let us call her Diana) sees what she believes to be a book in a school, then we are likely to say that Diana knows it is a book. However, if we know (but Diana does not) that students at this school are expert in making replica books that, to all external appearances, look like books but are solid and cannot be opened, then with a justified-true-belief view of knowledge, we would say that Diana does not know it is a book, even if it happens to be one, because her belief is not warranted. With such a view of knowledge, it is almost impossible for anyone to know anything.

Goldman’s solution to this dilemma is that Diana knows that the object she is looking at is a book if she can distinguish it from a relevant possible state of affairs in which it is not a book. In most cases, the possibility that the book-like object in front of Diana might not be a book is not a relevant state of affairs (because not many schools go around making such replicas), and so we would say that Diana does know it is a book. However, in our particular case there is a relevant alternative state of affairs—the book might be a dummy or it might be genuine. Since Diana’s current state of knowledge (i.e., before she picks it up and tries to open it) does not allow her to distinguish between these two possibilities, we would say that Diana does not know.

Applying this to research in mathematics education, we would say that we know something when we have evidence that supports our inference, and that we have ways of discounting relevant alternative interpretations of our data. Aspects of this are built into traditional experimental designs—for example, in trying out new educational treatments, we might randomize assignment to treatment and control groups to head off the rival interpretation that the higher test scores of the treatment group were due to factors unrelated to the treatment. However, much of the debate in research in mathematics education concerns what is to count as a relevant alternative interpretation. For example, some researchers claim that the poorer performance of females on some mathematics tests indicates a lower level of ability in mathematics, while others would attribute these differences to the gendered nature of the particular definition of mathematics underlying the tests, or teaching styles that were more suited to males than to females (Boaler, 1997).

In practice, we suggest, this is determined not by any absolute criteria of what interpretations should and should not be counted as relevant, but by the consensus of some community of practice (Lave & Wenger, 1991), be it teachers, researchers, or politicians (and, of course, different communities will come to different conclusions about what is relevant).

This is true in the physical sciences as much as in the social sciences. For example, Collins and Pinch (1993) describe the investigations following Joseph Weber’s claim in 1969 to have discovered gravitational radiation. The traditional view of the philosophy of science would have us believe that the claim was subjected rigorously to investigation and refutation, but the question of the existence of gravitational radiation was not settled by empirical means.

Between 1969 and 1975, there were six major attempts to replicate the original findings, each of which was unsuccessful. Weber then pointed out methodological flaws in each of the unsuccessful attempts, providing plausible rival interpretations—that is, that the results were due to defects in the experimental procedure. In fact, Weber’s critics also found flaws in five out of the six experiments. A scientific rationalist’s perspective would require at this point that the experiments be repeated, correcting the previous flaws, in order to see whether Weber’s results could be replicated. But this didn’t happen. Weber’s rival interpretations of the experimental results have been rejected by the community not on rationalist grounds but because Weber’s interpretations of the results are not considered relevant or plausible (Collins & Pinch, 1993, p. 107).

Sometimes what is and is not to be regarded as a plausible rival interpretation is made absolutely explicit, in the form of a theoretical stance. In other words, a researcher might say “because I am working from this theoretical basis, I interpret these results in the following way, and I do not consider that alternative interpretation to be plausible.” A good example of this is the logic of statistical significance testing, in which the rival interpretation that a result arose by chance alone is rejected whenever data as extreme as those observed would occur, under that interpretation, with a probability of less than 1 in 20. More often, however, communities of researchers operate within a shared discourse that rules out some alternative hypotheses, even though the assumptions are implicit and are often unrecognized.
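Stated as an explicit decision rule, the 1-in-20 convention looks as follows; this is our gloss, using notation that does not appear in the chapter, with H_0 standing for the rival “chance alone” interpretation and α for the conventional threshold:

```latex
% Our formal gloss on the 1-in-20 convention (notation is ours, not the chapter's).
p = \Pr\bigl(\text{data at least as extreme as those observed} \mid H_0\bigr),
\qquad
\text{set aside } H_0 \text{ as a relevant rival if } p < \alpha = \tfrac{1}{20} = 0.05 .
```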

Such a process can never be finally completed, and therefore knowledge can only be provisional rather than absolute. With this view of knowledge it is clear that there can never be a recipe for generating knowledge, and knowledge is more or less reliable according to the strengths of warrants for the preferred interpretation, and the assiduousness with which alternative interpretations have been pursued.

To sum up so far, we have argued that solutions to educational questions require the consideration of both the traditional, decontextualized knowledge produced by approaches espoused within scientific rationalism and also a knowledge of the contextual and human factors that are required if potential courses of future action are to be realized in classrooms. The prior beliefs and previous experiences of those involved influence both the amount and kind of evidence that must be marshaled in support of the claim being made, and also the extent and nature of alternative interpretations that must be explored.

The foregoing analysis has demonstrated that what might count as evidence in the production of knowledge is far more complex and varied than is usually acknowledged, and this multiplicity of forms of evidence creates its own difficulties. For this reason, the next section of this chapter deals with a typology of forms of evidence developed by C. West Churchman that leads to a systematization of different ways of building knowledge.

Churchman’s Classification of Systems of Inquiry

Churchman (1971) classified all systems of inquiry into five broad categories, each of which he labeled with the name of a philosopher (Leibniz, Locke, Kant, Hegel, and Singer) he felt best exemplified the stance involved in adopting the system. He gave particular attention in his classification to what is to be regarded as the primary or most salient form of evidence, as summarized in Table 3.1 (these are discussed in turn below). For detailed accounts of Churchman’s classification scheme see Churchman, 1971; Messick, 1989; Mitroff & Kilmann, 1978; and Mitroff & Sagasti, 1973.

Table 3.1 Source of evidence for five inquiry systems

Inquiry system    Source of evidence
Leibnizian        Reasoning
Lockean           Observation
Kantian           Representation
Hegelian          Dialectic
Singerian         Ethical values and practical consequences

Churchman’s framework is particularly useful in thinking about how to conduct research that makes a difference, and specifically, whether the research moves people to appropriate action. It does so by posing three questions that we should attempt to answer about our research efforts:

  1. Are the claims we make about our research based on inferences that are warranted on the basis of the evidence we have assembled?
  2. Are the claims we make based on convincing arguments that are more warranted than plausible rival claims?
  3. Are the consequences of our claims ethically and practically defensible?

In the following discussion, we describe his framework by considering how it might be applied to a real research question in mathematics education.

The current controversy over reform versus traditional mathematics curricula has attracted a great deal of attention in the United States and elsewhere among educators, professional mathematicians, politicians, and parents, and can serve to illustrate how these three questions might be used.

For some, the issue of whether the traditional or reform curricula provide the most appropriate means of developing mathematical competence is an issue that can be settled on the basis of logical argument. On one side, the proponents of reform curricula might argue that a school mathematics curriculum should resemble the activities of mathematicians, with a focus on the processes of mathematics. On the other side, the anti-reform movement might argue that the best preparation in mathematics is one based on skills and procedures. For example, a report by the London Mathematical Society, the Institute of Mathematics and Its Applications and the Royal Statistical Society (1995) argues that “To gain a genuine understanding of any process it is necessary first to achieve a robust technical fluency with the relevant content” (p. 9).

Despite reaching opposite conclusions, both points of view rely on rhetorical methods to establish their position, an example of what Churchman called a Leibnizian inquiry system. In such a system, certain fundamental assumptions are made, from which deductions are made by the use of formal reasoning rather than by using empirical data. In a Leibnizian system, reason and rationality are held to be the most important sources of evidence. Although there are occasions in educational research when such methods might be appropriate, they usually are not sufficient. In fact, typically the educational research community requires some sort of evidence from the situation under study (usually called empirical data).

The most common use of data in inquiry in both the physical and social sciences is via what Churchman calls a Lockean inquiry system. In such an inquiry, evidence is derived principally from observations of the physical world. Empirical data are collected, and then an attempt is made to build a theory that accounts for the data. This corresponds to what is sometimes called a “naive inductivist” paradigm in the physical sciences. Consider the following scenario.

A team of researchers, composed of the authors of a reform-minded mathematics curriculum and classroom teachers interested in using that curriculum, decide after considerable discussion and reflection to design a study in which grade 9 students are randomly assigned either to classrooms that will use the new curriculum or to those that will use the traditional curriculum. The research team’s goal is to investigate the effectiveness 6 (with respect to student learning) of the two curricula over the course of the entire school year. Suppose further that the research design they developed is appropriate for the sort of research they are intending to conduct.

From the data the team will gather, they hope to be able to develop a reasonable account of the effectiveness of the two curricula, relative to whatever criteria are agreed upon, and this account could lead them to draw certain conclusions (i.e., inferences). Were they to stop here and write a report, they would essentially be following a scientific rationalist approach situated in a Lockean perspective.
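As a purely hypothetical sketch of what such a Lockean end-point might look like in practice, the analysis often reduces to a comparison of group means on some agreed measure. Every name and number below, and the choice of Welch’s t statistic, are our illustrative assumptions rather than details of the scenario:

```python
import math
import random
from statistics import mean, stdev

# Hypothetical sketch of the Lockean end-point of the curriculum study described above.
# All names and numbers are illustrative assumptions, not data from the chapter.
random.seed(42)
students = list(range(200))
random.shuffle(students)  # random assignment to the two conditions
reform_group, traditional_group = students[:100], students[100:]

# Stand-in end-of-year scores; in the study itself these would come from agreed measures.
reform_scores = [random.gauss(72, 10) for _ in reform_group]
traditional_scores = [random.gauss(68, 10) for _ in traditional_group]

def welch_t(a, b):
    """Welch's two-sample t statistic: a standard way of comparing two group means."""
    var_a, var_b = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
    return (mean(a) - mean(b)) / math.sqrt(var_a + var_b)

print(f"Mean (reform) = {mean(reform_scores):.1f}, mean (traditional) = {mean(traditional_scores):.1f}")
print(f"Welch t = {welch_t(reform_scores, traditional_scores):.2f}")
# A report that stopped here, with a difference in means and a test statistic,
# would be the scientific rationalist account situated in a Lockean perspective.
```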

The major difficulty with a Lockean approach is that, because observations are regarded as evidence, it is necessary for all observers to agree on what they have observed. Because what we observe is based on the theories we have, different people will observe different things, even in the same classroom.

For less well-structured questions, or where different people are likely to disagree what precisely is the problem, a Kantian inquiry system is more appropriate. This involves the deliberate framing of multiple alternative perspectives, on both theory and data (thus subsuming Leibnizian and Lockean systems). One way of doing this is by building different theories on the basis of the same set of data. Alternatively, we could build two (or more) theories related to the problem, and then for each theory, generate appropriate data (different kinds of data might be collected for each theory).

For our inquiry into the relative merits of traditional and reform curricula, our researchers might not stop with the “crucial experiment” described above, but instead, would consider as many alternative perspectives as possible (and plausible) about both their underlying assumptions and their data. They might, for example, challenge one or more of their assumptions and construct competing explanations on the basis of the same set of data. These perspectives would result in part from their engagement in serious reflection about their underlying assumptions, and in part from submitting their data to the scrutiny of other persons who might have a stake in the research, for example, teachers who taught using the traditional curriculum. An even better approach would be to consider two or more rival perspectives (or theories) while designing the study, thereby possibly leading to the generation of different sets of data. For example, a study designed with a situated cognition (or situated learning) perspective in mind might result in a very different set of data being collected than a study based on contemporary cognitive theory (see Anderson, Reder, & Simon, 1997; Greeno, 1997). 7 These two different perspectives would also probably lead the researchers to very different explanations for the results (Boaler, 2000). For example, the partisans of the situated cognition perspective might attribute results favoring the reform curriculum to certain aspects of the social interactions that took place in the small groups (an important feature of the reform curriculum), whereas cognitivists might claim that it was the increased level of individual reflection afforded by the new curriculum materials, rather than the social interaction, that caused the higher performance among students who were in the reform classrooms.

The different representations of traditional and reform classrooms developed within a Kantian inquiry system may not be reconcilable in any straightforward sense. It may not be immediately apparent where these theories overlap and where they conflict, and indeed, these questions may not be meaningful, in that the inquiries might be incommensurable (Kuhn, 1962). However, by analyzing these inquiries in more detail, it may be possible to begin a process of theory building that incorporates the different representations of the situation under study.

This idea of reconciling rival theories is more fully developed in a Hegelian inquiry system, where antithetical and mutually inconsistent theories are developed. Not content with building plausible theories, the Hegelian inquirer takes a plausible theory, and then investigates what would have to be different about the world for the exact opposite of the most plausible theory itself to be plausible. The tension produced by confrontation between conflicting theories forces the assumptions of each theory to be questioned, thus possibly creating a co-ordination of the rival theories.

In our example, the researchers should attempt to answer two questions: (1) What would have to be true about the instruction that took place for the opposite of the situated learning explanation to be plausible? and (2) What would have to be true about the instruction that took place for the opposite of the cognitivist explanation to be plausible? If the answers to both these questions are “not very much,” then this suggests that the available data under-determine the interpretations that are made of them. This might then result in sufficient clarification of the issues to make possible a co-ordination, or even a synthesis, of the different perspectives, at a higher level of abstraction.

The differences between Lockean, Kantian, and Hegelian inquiry systems were summed up by Churchman as follows:

The Lockean inquirer displays the “fundamental” data that all experts agree are accurate and relevant, and then builds a consistent story out of these. The Kantian inquirer displays the same story from different points of view, emphasizing thereby that what is put into the story by the internal mode of representation is not given from the outside. But the Hegelian inquirer, using the same data, tells two stories, one supporting the most prominent policy on one side, the other supporting the most promising story on the other side. (1971, p. 177)

However, perhaps the most important feature of Churchman’s typology is that we can inquire about inquiry systems, questioning the values and ethical assumptions that these inquiry systems embody. This inquiry of inquiry systems is itself, of course, an inquiry system, termed Singerian by Churchman after the philosopher E. A. Singer (see Singer, 1957). Such an approach entails a constant questioning of the assumptions of inquiry systems. Tenets, no matter how fundamental they appear to be, are themselves to be challenged in order to cast a new light on the situation under investigation. This leads directly and naturally to examination of the values and ethical considerations inherent in theory building.

In a Singerian inquiry, there is no solid foundation. Instead, everything is permanently tentative; instead of asking what “is,” we ask what are the implications and consequences of different assumptions about what “is taken to be”:

The “is taken to be” is a self-imposed imperative of the community. Taken in the context of the whole Singerian theory of inquiry and progress, the imperative has the status of an ethical judgment. That is, the community judges that to accept its instruction is to bring about a suitable tactic or strategy.… The acceptance may lead to social actions outside of inquiry, or to new kinds of inquiry, or whatever. Part of the community’s judgement is concerned with the appropriateness of these actions from an ethical point of view. Hence the linguistic puzzle which bothered some empiricists—how the inquiring system can pass linguistically from “is” statements to “ought” statements—is no puzzle at all in the Singerian inquirer: the inquiring system speaks exclusively in the “ought,” the “is” being only a convenient façon de parler when one wants to block out the uncertainty in the discourse. (Churchman, 1971, p. 202; our emphasis in fourth sentence)

An important consequence of adopting a Singerian perspective is that with such an inquiry system, one can never absolve oneself from the consequences of one’s research. Educational research is a process of modeling educational processes, and the models are never right or wrong, merely more or less appropriate for a particular purpose, and the appropriateness of the models has to be defended. It is only within a Singerian perspective that the third of our key questions (Are the consequences of our claims ethically and practically defensible?) is fully incorporated. Consider the following scenario.

After studying the evidence obtained from the study, the research team has concluded that the reform curriculum is more effective for grade 9 students. Furthermore, this conclusion has resulted from a consideration of various rival perspectives. However, a sizable group of parents strongly opposes the new curriculum. Their concerns stem from beliefs that the new curriculum engenders low expectations among students, de-emphasizes basic skills, and places little attention on getting correct answers to problems. The views of this group of parents, who happen to be very active in school-related affairs, have been influenced by newspaper and news magazine reports raising questions about the new curricula, called “fuzzy math” by some pundits. To complicate matters further, although the teachers in the study were “true believers” in the new curriculum, many of the other mathematics teachers in the school district have little or no enthusiasm about changing their traditional instructional practices or using different materials, and only a few teachers have had any professional development training in the implementation of the new curriculum.

Before they begin to publicize their claims, the research team is obliged to consider both the ethical and practical issues raised by concerns and realities such as those presented above. Is it sensible to ask teachers to implement an instructional approach that will be challenged vigorously by some parents and perhaps others? Can they really claim, as the school district superintendent desires, that student performance on state mathematics tests will improve if the new curriculum is adopted? Are they confident enough in their conclusions about the merits of the new curriculum to recommend its use to inexperienced teachers? Should they encourage reluctant or resistant teachers to use this approach in their own classrooms if they may do so half-heartedly or superficially? Can these reluctant teachers be expected to implement this new curriculum in a manner consistent with reform principles? These sorts of ethical and practical questions are rarely addressed in research in mathematics education, but must be addressed if the researchers really care about moving the school district to act on their conclusions. Answers to questions such as these will necessitate prolonged dialogue with various groups, among them teachers, school administrators, parents, and students. In such dialogue, what is “reasonable” is likely to be a far more useful guide than what is “rational” (Toulmin, 2001).

Implicit in the Singerian system of inquiry is consideration of the practical consequences of one’s research, in addition to the ethical positions. Greeno (1997) suggests that educational researchers should assess the relative worth of competing (plausible) perspectives by determining which perspective will contribute most to the improvement of educational practice, and we would add that this assessment must take into account the constraints of the available resources (both human and financial), the political and social contexts in which education takes place, and the likelihood of success. While the Lockean, Kantian, and Hegelian inquirer can claim to be producing knowledge for its own sake, Singerian inquirers are required to defend to the community not just their methods of research, but which research they choose to undertake.

Singerian inquiry provides a framework within which we can conduct a debate about what kinds of research ought to be conducted. Should researchers work with individual teachers, supporting them to undertake research primarily directed at transforming their own classrooms, or should researchers instead concentrate on producing studies that are designed from the outset to be widely generalizable? Within a Singerian framework, both are defensible, but the researchers should be prepared to defend their decisions. The fact that the results of action research are often limited to the classrooms in which the studies are conducted is often regarded, from a traditional standpoint, as a weakness. Within a Singerian framework, however, radical improvements on a small scale may be regarded as a greater benefit than a more widely distributed, but less substantial, improvement.

We introduced this chapter by stating that the first part dealt with how (and for whom) research in mathematics education is undertaken, while the second focused on what counted as evidence. As should be clear from the foregoing analysis, we do not believe that such a distinction is, in fact, tenable. Research in mathematics education, as in any other field, is an integrated, and ultimately moral, activity that can be characterized as a never-ending process of assembling evidence that:

  1. Particular inferences (i.e., claims) are warranted on the basis of the available evidence;
  2. Such inferences are more warranted than plausible rival inferences;
  3. The consequences of such inferences are ethically and practically defensible (cf., Wiliam, 1998).

Furthermore, the basis for warrants, other plausible interpretations, and the ethical and practical bases for defending the consequences are constantly open to scrutiny and question.

Unfortunately, only rarely, in our experience, has any of the published mathematics education research included any significant attention to a discussion of rival inferences, and even more rarely have researchers addressed in their reports issues related to the ethical and practical defensibility of the claims they make. 8

Closing Thoughts

Philosopher Richard Rorty (1979) offers a point of departure for conceptualizing the dialogues that take place (a) within the research community, (b) within the community of practitioners, and (c) between these two groups. 9 Specifically, Rorty embraces post-modern philosophy as one voice in the ongoing conversation about what it means to be human. Within this conversation, he distinguishes between analytical philosophy and hermeneutic philosophy. In an analytic endeavor, the participants are seeking to extend a scientific rationalistic account of some phenomenon and may indeed conceive of themselves as producing eternal knowledge. In hermeneutic activity, the conversants seek only to steer the conversation in ways that enable people to better cope with some phenomenon in the present, not to establish an eternal body of knowledge. 10 This form of discourse is essential to the development of ethically informed, reasoned conversation between researchers and practitioners about issues that are fundamental to teaching and learning mathematics in contemporary society.

Anthropologist Mary Catherine Bateson (1994) likewise presents a moving vision of learning to which we might turn for inspiration. No single framework anchors learning in her account. She finds discourse based solely on abstract concepts inadequate to the challenge of understanding specific lived experience. Drawing upon several cases in which multiple diverse perspectives on shared experiences led her to deeper insights, she argues convincingly that “[i]nsight … refers to that depth of understanding that comes by setting experiences, yours and mine, familiar and exotic, new and old, side by side, learning by letting them speak to one another” (p. 14; emphasis in original). For Bateson, it is in the boundaries between what two or more people have to say about a common experience that real learning takes place.

In this chapter we have outlined why we believe that, without a radical shift in its orientation, research in mathematics education is unlikely to influence practice, and we have also argued that such an outcome is indefensible. We have suggested some possible ways to enhance communication among researchers, teachers, and other practitioners and, consequently, to do research that will move us—teachers, school administrators, curriculum developers, teacher educators, etc.—to action. The likelihood that this will happen will be increased if the conversation about the focus of our research is expanded in a rich and complete manner, one that pays attention to the multiple meanings and interpretations (including beliefs and assumptions) that each participant brings to the discussion.

Notes

1. Ziman (1978) has insisted that “the primary foundation for belief in science is the widespread impression that it is objective.” By “objective” he meant “knowledge without a knower: it is knowledge without a knowing subject” (p. 107, emphasis in original).

2. Concern about the ineffectual nature of mathematics education research is anything but new. In a 1971 article in the American Mathematical Monthly, Walbesser and Eisenberg argued against the emphasis in doctoral programs on experimental research: “The consequences of such training are obvious when one examines mathematics education dissertations. Too often the dissertation concerns the investigation of a trivial problem cloaked in an elegant statistical design” (Walbesser & Eisenberg, 1971, p. 668). For a more recent discussion of the impact of research on mathematics education, see Wiliam (2003).

3. Such an approach does not assert that there is no such thing as the physical world, but merely that the world is not “knowable” in any absolute sense. As Roger Shattuck has remarked, “Words do not reflect the world, not because there is no world, but because words are not mirrors” (quoted in Burgess, 1992, p. 119).

4. Ken Ruthven, at the time editor of Educational Studies in Mathematics, has proposed that in the United States “internally-focused—to the research community—concerns of epistemology and methodology [have seemed] predominant.… [Whereas] in the UK … the burning questions are externally-focused—touching on the credibility of the research community and its capacity not just to influence but to make a productive contribution to policy and practice” (K. Ruthven, personal e-mail communication with F. Lester, December 3, 1998). We suggest, however, that Ruthven’s view of the situation in the UK may be overly sanguine.

5. We do not wish to engage in a discussion of what it means for something to be true. We do find, however, von Glasersfeld’s notion of viability an appropriate alternative to the notion that there is only one truth that describes the world. For him, a thing (theory, model, concept, etc.) is viable if it proves to be adequate in the context in which it was developed (von Glasersfeld, 1995).

6. The notion of “effectiveness” is a thorny issue because effectiveness is determined by what is valued. Thus, it is possible that each curriculum might be judged the more effective, depending on the research team’s value judgments.

7. Cobb and Bowers (1999) consider the potential contributions of the situated learning and cognitive perspectives to teaching practice “by contrasting their differing formulations of the relationship between theory and practice” (p. 4).

8. We are not alone in our call for attention to rival inferences. Paul Cobb (2007) provides a compelling discussion of how researchers can cope with rival, even incommensurable, theoretical perspectives. Cobb suggests that “rather than adhering to one particular theoretical perspective, we [should] act as bricoleurs by adapting ideas from a range of theoretical sources” (cf. Gravemeijer, 1994).

9. The distinction between researchers and practitioners has become increasingly blurred as more and more research is conducted by teacher-researchers and by teacher-researcher collaborative teams.

10. Flyvbjerg (2001) makes a similar distinction, building on Aristotle’s distinction between episteme (eternal truth) and phronesis (practical wisdom), and argues that social science should be driven by concerns of value rationality rather than analytic rationality.

References

Anderson, J. R., Reder, L. M., & Simon, H. A. (1996). Situated learning and education. Educational Researcher, 25(4), 5–11.
Bateson, M. C. (1994). Peripheral visions: Learning along the way. New York: Harper Collins.
Bishop, A. J., Clements, K., Keitel, C., Kilpatrick, J., & Laborde, C. (Eds.). (1996). International handbook of mathematics education (2 volumes). Dordrecht, The Netherlands: Kluwer.
Boaler, J. (1997). Experiencing school mathematics: Teaching styles, sex and setting. Buckingham, UK: Open University Press.
Boaler, J. (2000). Exploring situated insights into research and learning. Journal for Research in Mathematics Education, 31(1), 113–119.
Brophy, J. (1981). Teacher praise: A functional analysis. Review of Educational Research, 51(1), 5–32.
Burgess, J. P. (1992). Synthetic physics and nominalist realism. In C. W. Savage & P. Ehrlich (Eds.), Philosophical and foundational issues in measurement theory (pp. 119–138). Hillsdale, NJ: Erlbaum.
Chalmers, A. F. (1978). What is this thing called science? Milton Keynes, UK: Open University Press.
Churchman, C. W. (1971). The design of inquiring systems: Basic concepts of system and organization. New York: Basic Books.
Cobb, P., & Bowers, J. (1999). Cognitive and situated learning perspectives in theory and practice. Educational Researcher, 28(2), 4–15.
Cobb, P. (2007). Putting philosophy to work: Coping with multiple theoretical perspectives. In F. Lester (Ed.), Handbook of research on mathematics teaching and learning (2nd ed.). Greenwich, CT: Information Age.
Collins, H. M., & Pinch, T. (1993). The golem: What everyone should know about science. Cambridge, UK: Cambridge University Press.
Egsgard, J. (1978). Message from the President. Journal for Research in Mathematics Education, 9(3), 240–241.
Flyvbjerg, B. (2001). Making social science matter: Why social inquiry fails and how it can succeed again. Cambridge, UK: Cambridge University Press.
Glaser, E. M., Abelson, H. H., & Garrison, K. N. (1983). Putting knowledge to use: Facilitating the diffusion of knowledge and the implementation of planned change. San Francisco, CA: Jossey-Bass.
Goldman, A. I. (1976). Discrimination and perceptual knowledge. Journal of Philosophy, 73(20), 771–791.
Gravemeijer, K. (1994). Educational development and developmental research. Journal for Research in Mathematics Education, 25, 443–471.
Greeno, J. G. (1997). On claims that answer the wrong questions. Educational Researcher, 26(1), 5–17.
Griffiths, A. P. (Ed.). (1967). Knowledge and belief. Oxford, UK: Oxford University Press.
Grouws, D. A. (Ed.). (1992). Handbook of research on mathematics teaching and learning. New York: Macmillan.
Harding, S. (1987). Introduction: Is there a feminist method? In S. Harding (Ed.), Feminism and methodology: Social science issues (pp. 1–14). Bloomington, IN and Milton Keynes, UK: Indiana University Press/Open University Press.
Hargreaves, D. H. (1998, August). The knowledge-creating school. Paper presented at the annual meeting of the British Educational Research Association, Belfast, Northern Ireland.
Hlavaty, J. H. (1970). Message from the President. Journal for Research in Mathematics Education, 1(1), 7.
Johnson, G. (1996). Fire in the mind: Science, faith and the search for order. London: Viking.
Kitcher, P. (1984). The nature of mathematical knowledge. New York: Oxford University Press.
Kluger, A. N., & DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119(2), 254–284.
Kuhn, T. S. (1962). The structure of scientific revolutions. Chicago: University of Chicago Press.
Lacey, C. (1970). Hightown Grammar: The school as a social system. Manchester, UK: Manchester University Press.
Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. Cambridge, UK: Cambridge University Press.
Lester, F. K., & Lambdin, D. V. (2003). The professionalization of the mathematics education research community in the United States. In G. M. A. Stanic & J. Kilpatrick (Eds.), A recent history of mathematics education in the United States and Canada. Reston, VA: National Council of Teachers of Mathematics.
London Mathematical Society, Institute of Mathematics and Its Applications, & Royal Statistical Society. (1995). Tackling the mathematics problem. London: London Mathematical Society.
Messick, S. (1989). Validity. In R. L. Linn (Ed.), Educational measurement (pp. 13–103). Washington, DC: American Council on Education/Macmillan.
Mitroff, I. I., & Kilmann, R. H. (1978). Methodological approaches to social science. San Francisco: Jossey-Bass.
Mitroff, I. I., & Sagasti, F. R. (1973). Epistemology as general systems theory: An approach to the design of complex decision-making experiments. Philosophy of the Social Sciences, 3, 117–134.
Nonaka, I., & Takeuchi, H. (1995). The knowledge-creating company. Oxford, UK: Oxford University Press.
Rorty, R. (1979). Philosophy and the mirror of nature. Princeton, NJ: Princeton University Press.
Ryle, G. (1949). The concept of mind. New York: Barnes & Noble.
Schwandt, T. A. (1995, November). Justifying claims in evaluation: Something old, something new? Paper prepared as a response to the panel “Justifying Claims in Evaluation” at the joint meeting of the American Evaluation Association and the first International Evaluation Conference, Vancouver, British Columbia.
Schwandt, T. A. (1996). Indexing the practice of reasoning in evaluation. Unpublished manuscript. Available from the author, School of Education, Indiana University, Bloomington, Indiana.
Shotter, J. (1993). Conversational realities: Constructing life through language. Thousand Oaks, CA: Sage.
Sierpinska, A., & Kilpatrick, J. (Eds.). (1998). Mathematics education as a research domain: A search for identity (2 volumes). Dordrecht, The Netherlands: Kluwer.
Singer, E. A., Jr. (1959). Experience and reflection. Philadelphia: University of Pennsylvania Press.
Toulmin, S. (2001). Return to reason. Cambridge, MA: Harvard University Press.
Von Glasersfeld, E. (1995). A constructivist approach to teaching. In L. P. Steffe & J. Gale (Eds.), Constructivism in education (pp. 3–15). Hillsdale, NJ: Erlbaum.
Walbesser, H. H., & Eisenberg, T. (1971). What research competencies for the mathematics educator? American Mathematical Monthly, 78(June–July), 667–673.
Wiliam, D. (1998). A framework for thinking about research in mathematics and science education. In J. A. Malone, B. Atweh, & J. R. Northfield (Eds.), Research and supervision in mathematics and science education (pp. 1–18). Mahwah, NJ: Erlbaum.
Wiliam, D. (2003). The impact of educational research on mathematics education. In A. Bishop, M. A. Clements, C. Keitel, J. Kilpatrick, & F. K. S. Leung (Eds.), Second international handbook of mathematics education (pp. 469–488). Dordrecht, The Netherlands: Kluwer Academic.
Ziman, J. (1978). Reliable knowledge: An exploration of the grounds for belief in science. Cambridge, UK: Cambridge University Press.