In this article, I share what the GRIP Lab has learned by collecting responses from Geometry for Teachers (GeT) students who completed our mathematical knowledge for teaching geometry (MKT-G) assessment before and after taking the GeT course.
MKT-G instrument
Herbst and Kosko (2014) developed an instrument to measure MKT-G that follows the definitions of content knowledge for teaching from Ball, Thames, and Phelps (2008). We used that instrument to estimate preservice teachers’ MKT-G using a unidimensional item response theory (IRT) model.
To understand the participating GeT students’ MKT-G growth in relation to in-service teachers’ MKT-G, GeT students’ scores were placed on the same scale as in-service teachers’ MKT-G scores. Specifically, GeT students’ item responses were pooled with the responses of 605 in-service teachers to the same 21 stem items, so that the GeT students’ MKT-G standing relative to the in-service teachers could be examined.
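To make this scaling idea concrete, here is a minimal sketch, not the lab’s actual calibration code, of how pooled responses can be placed on a common IRT scale and how GeT students can then be expressed relative to the in-service teachers. It uses a simple Rasch model fit by joint maximum likelihood rather than the instrument’s full unidimensional IRT model, and the response arrays are synthetic stand-ins for the real data.

```python
import numpy as np

def fit_rasch_jml(responses, n_iters=50):
    """Joint maximum-likelihood fit of a simple Rasch (1PL) model.

    responses: (n_persons, n_items) array of 0/1 item scores.
    Returns person abilities and item difficulties on a common logit scale.
    """
    theta = np.zeros(responses.shape[0])   # person abilities
    b = np.zeros(responses.shape[1])       # item difficulties
    for _ in range(n_iters):
        p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
        # Newton step for abilities with item difficulties held fixed.
        theta += (responses - p).sum(axis=1) / (p * (1 - p)).sum(axis=1)
        theta = np.clip(theta, -6, 6)      # guard against extreme score patterns
        p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
        # Newton step for difficulties, then center them for identification.
        b -= (responses - p).sum(axis=0) / (p * (1 - p)).sum(axis=0)
        b -= b.mean()
    return theta, b

# Synthetic stand-in data: 605 in-service teachers stacked above 222 GeT
# students, all answering the same 21 stem items.
rng = np.random.default_rng(0)
teachers = rng.integers(0, 2, size=(605, 21))
students = rng.integers(0, 2, size=(222, 21))
pooled = np.vstack([teachers, students])

theta, _ = fit_rasch_jml(pooled)
teacher_theta, student_theta = theta[:605], theta[605:]

# Express each GeT student's ability in in-service-teacher standard deviation units.
student_z = (student_theta - teacher_theta.mean()) / teacher_theta.std()
print(student_z.mean())
```

Pooling the responses before estimation is what puts student and teacher scores on a shared scale; the instrument’s actual IRT model and estimation routine differ in their details.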
Research Questions
- What is the growth in MKT-G scores that happens during a GeT course?
- How do GeT students compare in MKT-G to a national sample of in-service teachers?
- Are there differences in the growth of MKT-G scores between students who seek teaching certification and other students also taking the GeT course?
Data
This analysis considers the responses from 222 students taking 15 GeT courses taught by 13 GeT instructors in the 2018/2019 academic year. Of these 222 students, 123 (55.4%) were preparing for teacher certification.
Method
We estimate the growth in MKT-G scores using a linear regression model:

Score_it = β0 + β1 · Post_t + ε_it,

where Score_it is student i’s MKT-G IRT score at occasion t, Post_t indicates the end-of-semester assessment, β0 is the average MKT-G IRT score at the beginning of the semester, and β1 is the estimated growth in MKT-G IRT scores by the end of the semester. This regression model is equivalent to a paired t-test that compares the average MKT-G IRT scores before and after the Geometry for Teachers course. Using a regression allows us to adjust the growth estimate for students’ covariates: we adjust our estimates for students’ programs and majors as well as their demographic characteristics.
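As an illustration of the estimation strategy just described, here is a minimal sketch in Python (statsmodels), not the lab’s actual analysis code. The data are synthetic, the covariate is a placeholder, and clustering the standard errors by student is one reasonable way to handle the repeated pre/post measures; the lab’s exact specification may differ.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Synthetic stand-in for the real data: 222 students with a pre- and a
# post-course MKT-G IRT score each (long format: one row per test occasion).
rng = np.random.default_rng(1)
n = 222
pre = rng.normal(-1.0, 1.0, n)          # pre-course IRT scores
post = pre + rng.normal(0.16, 0.5, n)   # post-course IRT scores with modest growth
cert = rng.integers(0, 2, n)            # placeholder covariate: seeking certification
df = pd.DataFrame({
    "student_id": np.repeat(np.arange(n), 2),
    "post": np.tile([0, 1], n),
    "score": np.column_stack([pre, post]).ravel(),
    "certification": np.repeat(cert, 2),
})

# Unadjusted model: the intercept is the pre-course mean and the coefficient on
# `post` is the average growth; its point estimate equals the mean paired difference.
base = smf.ols("score ~ post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["student_id"]}
)
print(base.params["post"], (post - pre).mean(), stats.ttest_rel(post, pre))

# Covariate-adjusted model (the covariate here stands in for the programs,
# majors, and demographic characteristics mentioned in the text).
adjusted = smf.ols("score ~ post + C(certification)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["student_id"]}
)
print(adjusted.summary())
```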
Results/Discussion
Three main results emerge:

1. On average, students score about 0.161 standard deviation units higher on the MKT-G test after completing the Geometry for Teachers course, controlling for their programs, majors, and demographic characteristics.
2. On average, GeT students score about one standard deviation below the in-service teachers (who had an average of 14.2 years of mathematics teaching experience) who took the same test.
3. Students who plan to be mathematics teachers show larger gains in MKT-G than other students, on average (0.234 standard deviation units of growth compared to 0.09).

These results point to two main conclusions about the Geometry for Teachers course. First, teachers develop knowledge about geometry while they teach: the difference between the students in our sample and the average in-service teacher is about the expected growth in teacher knowledge that happens after teaching geometry for five years (see Desimone, Hochberg, & McMaken, 2016). Second, taking a specialized GeT course appears to close this gap in knowledge by about one year: a gain of 0.161 standard deviations is roughly one fifth of the one-standard-deviation gap, or about one year of that five-year trajectory. As we move forward with this work, we continue to gain a better understanding of the importance and value of the GeT course for preservice teachers.
Mike Ion is a Research Assistant in the GRIP Lab.
References
Ball, D., Thames, M. H., & Phelps, G. (2008). Content knowledge for teaching: What makes it special? Journal of Teacher Education, 59(5), 389–407.
Desimone, L., Hochberg, E. D., & McMaken, J. (2016). Teacher knowledge and instructional quality of beginning teachers: Growth and linkages. Teachers College Record, 118(5), 1–54.
Herbst, P., & Kosko, K. (2014). Mathematical knowledge for teaching and its specificity to high school geometry instruction. In J.-J. Lo, K. R. Leatham, & L. R. Van Zoest (Eds.), Research trends in mathematics teacher education (pp. 23–45). Cham: Springer.
