What are we trying to achieve when we teach Geometry for Teachers?



One of the goals of the GeT: A Pencil community is to improve the instructional capacity for high school geometry. Geometry courses for teachers can be instrumental in equipping teachers with that instructional capacity. It is pertinent to ask what evidence we can use to steer that mission. Unlike in the K-12 environment, where curriculum development and implementation are used to ensure that instruction meets standards, the culture of college instruction is founded on academic freedom, and instructors often take pride in developing their own course materials. In this context, where instructors may be doing different things, it is important to understand what could be meant by improvement and how we could know that we are improving instructional capacity.

The approach to improvement science espoused by Bryk et al. (2015) counters the usual paradigm of evaluation research, which is often focused on establishing the main effects of interventions while controlling for implementation fidelity. Bryk et al. (2015) consider it sensible that interventions will vary across sites as they attend to characteristics of their context. This aligns well with the situation in which each instructor of Geometry for Teachers designs and implements their course: Instructors know the students they usually have, their mathematical backgrounds, and other elements of their professional preparation. It would not be sensible to try to make all Geometry for Teachers courses alike.

However, based on analyses of healthcare operations, Bryk et al. (2015) propose that an alternative guide for improvement would be to try to reduce the variability in the outcomes of education interventions. In healthcare, outcomes might include various measures of patients’ health, such as the time to hospital discharge by condition treated or the number of changes in treatment needed to achieve recovery. In terms of time to discharge, for example, we know that recovery depends not only on the efficiency of medical teams but also on the condition and the patient’s comorbidities. Thus, to say that a healthcare provider is doing good quality work, it would not be sensible to expect the time to discharge to drop to zero or for all patients to take the same amount of time to recover. Yet, if predictions of the time to discharge spanned a very large interval, this large variability might suggest a possible focus for improvement. Instructional capacity for geometry teaching may also be seen as amenable to this kind of improvement. We know that there will always be things a teacher needs to know that they did not learn in our courses; we also know that there will always be things they had the opportunity to learn in our courses and yet did not. How can we think about reducing the variability of outcomes of geometry courses for teachers in ways that allow us to gauge the improvement of our collective efforts to increase instructional capacity? What are some options for outcome variables whose variability we could aim to reduce?
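
To make the contrast concrete, here is a minimal sketch in Python, using made-up numbers: two hypothetical cohorts recover in five days on average, but an individual outcome in the second cohort is far less predictable, which is the kind of signal that, in Bryk et al.’s (2015) framing, marks a process as a candidate for improvement.

```python
import statistics

# Illustrative only: made-up discharge times (in days) for two hypothetical
# cohorts with the same average outcome but different variability.
cohort_a = [4, 5, 6, 5, 5, 4, 6, 5]   # tightly clustered around 5 days
cohort_b = [1, 9, 3, 8, 2, 7, 5, 5]   # same mean of 5 days, widely spread

for name, cohort in [("A", cohort_a), ("B", cohort_b)]:
    mean = statistics.mean(cohort)
    sd = statistics.stdev(cohort)
    # Under a rough normality assumption, mean +/- 2 SD brackets where
    # most individual outcomes fall; a wide bracket flags high variability.
    print(f"Cohort {name}: mean = {mean:.1f} days, sd = {sd:.1f}, "
          f"~95% of cases in ({mean - 2 * sd:.1f}, {mean + 2 * sd:.1f})")
```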

One way of assessing the variability of outcomes could be to count credits among graduates, such as how many geometry courses a teacher had in their preparation. In the past, the number of mathematics courses a teacher had taken was believed to have an influence on teacher performance, but research has not been conclusive (Begle, 1979; Monk, 1994). In our experience surveying practicing high school geometry teachers, however, this number has shown very little variability to begin with, to the point that it does not even make sense to ask what its effects are on other teacher variables (e.g., the amount of mathematical knowledge for teaching). 

A second way of assessing the reduction in variability of outcomes comes from the availability of scores on our MKT-G test (Herbst & Kosko, 2014; Ko & Herbst, 2020). The GRIP lab has surveyed a nationally distributed sample of practicing high school teachers using this instrument. We have also been administering the MKT-G test at the beginning and at the end of the GeT course to students of instructors in the GeT: A Pencil community. As a result of implementing the test over several semesters and across several courses, we have gotten a sense of how much mathematical knowledge for teaching geometry students have when they start the course and how much they have when they end. On average, they start at a score of -1.10 and end at -0.92, both below the standardized mean of practicing geometry teachers. Our current data suggest that the average student’s increase in MKT-G is about 0.18 standard deviations. As experience teaching geometry correlates with MKT-G scores, that increase of 0.18 SD is equivalent to the growth a teacher would have in 2.5 years of experience. While one way to think about improvement might urge us to try to increase that difference beyond 0.18 SD, we also know that such an increase is likely to be bounded, as there is only so much learning that can happen in a semester.
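
For readers who want the arithmetic spelled out, the snippet below simply restates the figures above; the per-year rate is implied by the reported numbers (0.18 SD corresponding to about 2.5 years of experience), not an independently estimated quantity.

```python
# Effect-size arithmetic from the figures reported above. Scores are
# standardized against practicing geometry teachers (mean 0, SD 1).
pre, post = -1.10, -0.92
gain = post - pre                        # 0.18 SD over one semester
years_equivalent = 2.5                   # reported experience equivalent
gain_per_year = gain / years_equivalent  # ~0.072 SD per year of experience
print(f"semester gain = {gain:.2f} SD, "
      f"comparable to ~{years_equivalent} years of experience "
      f"(~{gain_per_year:.3f} SD/year)")
```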

The improvement approach would suggest that we look instead at whether the variability in outcomes decreases over time. For example, when students arrive at our GeT courses, what they know of geometry for teaching may vary widely depending on their prior experiences. At the end of the course, we would expect everybody to know more. But rather than only looking at this average growth, we could also look at the variability of individual growth. What is the variation in growth among the students in a class over the semesters? What is the variation in growth among the students of instructors in our community over the semesters? While increasing the average MKT-G score gain is desirable, reducing the variability among individual gains could help us argue that geometry for teachers courses are associated with predictable gains for individual prospective teachers.
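
One way to operationalize this, sketched below with invented numbers, is to compute each student’s pre-to-post gain and track the spread of those gains semester by semester; under the improvement lens, progress shows up as a shrinking standard deviation of gains, not only a rising mean.

```python
import statistics

# A minimal sketch with invented data: each tuple is one student's
# (pre, post) standardized MKT-G scores in a given semester.
semesters = {
    "Semester 1": [(-1.3, -1.0), (-0.9, -0.9), (-1.2, -0.7), (-1.0, -0.6)],
    "Semester 2": [(-1.1, -1.0), (-1.2, -1.0), (-1.0, -0.7), (-0.9, -0.7)],
}

for term, scores in semesters.items():
    gains = [post - pre for pre, post in scores]
    # Improvement in Bryk et al.'s sense: the sd of gains shrinks from one
    # semester to the next, even if the mean gain stays about the same.
    print(f"{term}: mean gain = {statistics.mean(gains):.2f} SD, "
          f"sd of gains = {statistics.stdev(gains):.2f}")
```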

Yet the MKT-G test is built on a conception of instructional capacity as the capacity to do tasks of teaching high school geometry, in the context of instructional situations from the high school geometry course. As not all GeT courses focus on that material, the MKT-G is not necessarily aligned with a conception of desirable outcomes shared by instructors of the course. It is reasonable to look for reductions in the variability of growth in MKT-G scores, but that by itself might not hold the key to how to increase instructional capacity.

The recent effort by the Teaching GeT working group to develop a shared list of student learning outcomes (SLOs) helps move toward understanding what variability in outcomes we might aim to reduce. In this issue of the newsletter, Nat Miller introduces the effort and lists the student learning outcomes the Teaching GeT group developed over the 2019-2020 academic year. Another note, by Sharon Vestal, provides a commentary on the first of these SLOs. We are eager to publish commentaries on all of these SLOs, as well as written responses to published elaborations, or possibly complementary elaborations. The effort to identify SLOs rests on the notion that while we might be foolish to expect all GeT courses to be the same, mathematics departments in high schools, parents, and high school students are entitled to expect their geometry teachers to have some competencies. GeT instructors could choose many materials and pedagogical approaches to achieve those outcomes, and as long as those outcomes are achieved, our community could stand behind any one of our graduates.

But in order to assess our improvement using these SLOs, it is important to develop consensus on them. We hope the notes included in this newsletter and the following ones will help us move toward such consensus. It would then be sensible to survey students as to whether they have had, in their GeT courses, opportunities to learn aligned with each SLO. The notion of improvement as a reduction in the variability of outcomes could then be understood as having more and more students indicate that they have had opportunities to learn the same SLOs, even if the ways in which those opportunities were provided varied.

As we develop consensus, it also seems important to try to integrate the developing SLOs into the framework of the MKT-G test, as we can use successive administrations of the test to calibrate and phase in new items that, over time, might also inform the assessment of growth among GeT students. MKT-G items usually pose a mathematics problem in the context of a task that a high school geometry teacher may need to do. Creating problems for students, preparing materials for lessons, understanding what students do in response to problems, crafting explanations for key ideas, and providing definitions are among the tasks a teacher has to do routinely. We hope that as the SLOs develop we might also hear suggestions as to how to create assessment items that tap into the knowledge named in the SLOs. We anticipate that such items might help bring closer together the objectives GeT instructors have for their courses and the knowledge needed for teaching high school geometry. This effort may therefore help us track more accurately how we are improving instructional capacity for high school geometry.

References

Begle, E. G. (1979). Critical variables in mathematics education: Findings from a survey of the empirical literature. Washington, DC: Mathematical Association of America and National Council of Teachers of Mathematics.

Bryk, A. S., Gomez, L. M., Grunow, A., & LeMahieu, P. G. (2015). Learning to improve: How America’s schools can get better at getting better. Harvard Education Press.

Herbst, P., & Kosko, K. (2014). Mathematical knowledge for teaching and its specificity to high school geometry instruction. In J. Lo, K. R. Leatham, & L. R. Van Zoest (Eds.), Research Trends in Mathematics Teacher Education (pp. 23-45). New York, NY: Springer.

Ko, I., & Herbst, P. (2020). Subject matter knowledge of geometry needed in tasks of teaching: Relationship to prior geometry teaching experience. Forthcoming in Journal for Research in Mathematics Education, 51(5).

Monk, D. H. (1994). Subject area preparation of secondary mathematics and science teachers and student achievement. Economics of Education Review, 13(2), 125-145.


Author(s):

Pat Herbst
I am a professor of education and mathematics. I direct the GRIP Lab (gripumich.org), which has been convening the GeT: A Pencil community.
