Over the past year, the Teaching GeT working group proposed that one way to contribute to reducing the variability in outcomes in the preparation of secondary geometry teachers would be to formulate and steward a set of ten student learning objectives (SLOs) that instructors of GeT courses could use. We recognize that the SLOs are a work in progress and that at any one time we are dealing with one version of them. Precisely because of the open-ended nature of the SLOs, it is important to identify the many sources of warrants we could rely on, both to use the SLOs in building more specific curriculum and instruction and to improve the SLOs themselves. Important sources for the development of the SLOs have included the mathematical domain of geometry and its history, instructors’ experiences teaching geometry courses and what they have seen their students do in those courses, policy documents for the teaching of geometry in K-12 and college, mathematics education scholarship, and instructors’ knowledge of research and practice in the teaching and learning of geometry at the secondary level. Those sources have supported lively discussions about what to include and how to prioritize possible inclusions. We at the GRIP thought that gathering students’ work on items that elicit the knowledge named in the SLOs could provide another kind of warrant to support discussions about the SLOs.
Based on the SLOs v.0 produced by the Teaching GeT group, members of the GRIP Lab at the University of Michigan developed a set of open-ended assessment items that tap into GeT students’ attainment of the SLOs. The intention was to have each item elicit the knowledge named in one of the SLOs, though it was apparent that item responses might also provide evidence of knowledge of other SLOs. Following the genre of other MKT assessments (e.g., Ball et al., 2008; Herbst & Kosko, 2014; Hill et al., 2004), each item describes an event in a high school geometry classroom in which the teacher needs to make a decision that requires the knowledge named in the targeted SLO. For example, the following item, designed to measure SLO 1 (Proofs), asked the participant to consider this scenario:

Unlike the usual MKT-G items, these items did not give respondents a set of alternatives to choose from; instead, respondents were asked to compose an open-ended response and enter it in a text field.
The process through which the current set of items was created was loosely based on the guidelines for developing measurement scales specified by DeVellis (2014, pp. 105-152). In particular, because the constructs (the SLOs) were already defined, the majority of the work involved scoping several items for each SLO, choosing which of those scopes to turn into actual items, writing those items, and putting them through rounds of revision. The vetting of initial drafts included considering whether the teaching scenario described in a given item (the student work, the decision the teacher had to make, etc.) seemed realistic and whether the item seemed likely to elicit a response driven mainly by the knowledge named in the targeted SLO. In the end, two items for each SLO were chosen to be administered.
These items are a first, rapid prototype of what a summative assessment might look like, created to gather data to support our collective work on the SLOs. That is, we do not yet know enough about the items to use them for consequential purposes such as appraising an individual’s attainment of a specific SLO, an individual’s attainment of the SLOs as a whole, or a class’s average attainment of the SLOs as a proxy for the quality of the attained curriculum. The items target geometry knowledge by posing problems contextualized in tasks of teaching, and they make minimal assumptions about respondents’ knowledge of mathematics schooling; however, they are not intended to assess knowledge of pedagogy.
While the items are not ready to be used in any formal assessment of students or evaluation of courses, they support the process of stewarding the SLOs by prototyping the kinds of items our community might need in order to document students’ progress toward SLO attainment. So far, we have collected student responses from seven GeT courses taught in the Winter 2021 term. These responses can provide an empirical basis for our community to discuss and improve the SLOs; for example, the content students bring up in their responses may or may not resonate with the expectations we had about what it would mean to attain an SLO.
To engage the community in that conversation, we proposed a workshop where current and prospective members of GeT: A Pencil could review the items and students’ responses to them. Rather than working intensively over a few days, as in a traditional conference workshop, and to make the workshop easier to attend, participants were asked to commit a couple of hours every second week over the summer and early fall term. For each item, they would discuss what the item seemed to assess in light of the responses and the SLOs. Participants were then given access to more responses in a Canvas forum, where they continued to discuss the items. Finally, during the week of October 4th, participants had the opportunity to discuss the assessment more holistically.
In this volume and future issues of GeT: The News!, we will provide articles that take a deeper dive into the items themselves. In these articles, we will present an item and its intended SLO, our a priori analysis of the item, what we heard from instructors about the item, and a categorized summary of how students responded to it. As we have learned from these workshops, there is much to be gained not only from correct responses but also from incorrect or partially correct ones, which we will show through these articles.
References
Ball, D. L., & Cohen, D. K. (1999). Developing practice, developing practitioners: Toward a practice-based theory of professional education. Teaching as the Learning Profession: Handbook of Policy and Practice, 1, 3–22.
DeVellis, R. (2014). Scale development: Theory and applications. Thousand Oaks, CA: Sage Publications.
Herbst, P., & Kosko, K. (2014). Mathematical Knowledge for Teaching and its Specificity to High School Geometry Instruction. In J.-J. Lo, K. R. Leatham, & L. R. Van Zoest (Eds.), Research Trends in Mathematics Teacher Education (pp. 23–45). Springer International Publishing.
Hill, H. C., Schilling, S. G., & Ball, D. L. (2004). Developing measures of teachers’ mathematics knowledge for teaching. Elem. Sch. J., 105(1), 11-30.

