Author: Mike Ion

  • Does our MKT-G Instrument Measure the Same Knowledge in the Same Way for GeT Students and for Practicing Teachers?

    In order to understand the impact of a teaching intervention (e.g., a course or a professional development program) on what students gained from that experience, students are often administered a test before and after the intervention, and researchers study the gains computed by subtracting the pre-test scores from the post-test scores. Similarly, researchers use the difference in groups’ test scores to compare performance between different groups. Using a difference in test scores in this way, however, requires the assumption that the test items measure the same construct (e.g., knowledge) in the same way at different points in time (e.g., pre- and post-test) or for different groups of participants (e.g., pre-service and practicing teachers).

    Ensuring that this assumption of measurement invariance holds (i.e., that the same test items measure the same construct in the same way) is important when comparing amounts of, or gains on, a construct, because an observed difference in scores could reflect the same items measuring different constructs rather than a difference in the target construct itself. For example, different groups of participants could interpret the same wording differently due to their demographics or educational backgrounds. Similarly, participants could react differently to the same item content depending on when the item is administered. As such, we cannot take for granted that using the same assessment items guarantees that those items measure the same thing across different groups of participants or over time. To validly compare a measured construct across groups or time points, it is recommended that a test of measurement invariance be performed; in other words, it is important to demonstrate that the way in which items are related to a target construct (e.g., MKT-G) is equivalent across the compared populations and over time. The statistical technique used to test this invariance in our study is multi-group confirmatory factor analysis (Brown, 2006). In this note, we present how we used measurement invariance tests to estimate the gain in GeT students’ mathematical knowledge for teaching geometry (MKT-G) from before to after taking GeT courses and how their post-test scores differ from practicing teachers’ MKT-G.

    Using the 17 MKT-G items developed by Herbst’s research group, we examined the participating GeT students’ MKT-G growth over the duration of the course. Also, by scaling that growth against a distribution of practicing teachers’ MKT-G scores, we approximated GeT students’ growth in terms of in-service teachers’ years of experience. An assumption in estimating a construct (here, MKT-G) from a set of item responses is that the common variance among the responses is accounted for by the construct and that the relationship between an item’s score and the latent construct is linear. The slope of that linear function, whose horizontal axis represents the level of the latent construct and whose vertical axis represents the item score, is the item factor loading, representing the magnitude of the relationship between the item and MKT-G. The intercept of the linear function is the predicted item score when the level of MKT-G is zero. Thus, the equivalence in the way items are related to the targeted construct across groups can be examined by testing the equality of the structure of the construct (configural invariance), the factor loadings (metric invariance), and the item intercepts (scalar invariance). We tested the equivalence of item parameters simultaneously, not only between GeT students and practicing teachers but also between GeT students’ pre-test and post-test.
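    To make concrete what equal loadings and intercepts mean, here is a small illustrative simulation. It is not the multi-group confirmatory factor analysis we actually estimated (in a real CFA the latent level is unobserved and all parameters are estimated jointly); we treat the latent MKT-G level as known simply to show that, under metric and scalar invariance, the same item parameters generate both groups’ scores, so mean score differences reflect the construct. All numbers below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_item(theta, loading, intercept, noise_sd=0.5):
    """Item score as a linear function of the latent construct (plus noise)."""
    return intercept + loading * theta + rng.normal(0.0, noise_sd, size=theta.shape)

# Hypothetical latent MKT-G levels for two groups on a common scale.
theta_students = rng.normal(0.0, 1.0, size=500)   # pre-service GeT students
theta_teachers = rng.normal(1.0, 1.0, size=500)   # practicing teachers (higher mean)

# Under metric + scalar invariance, the SAME loading and intercept
# generate the item scores in both groups.
loading, intercept = 0.8, 0.2
y_students = simulate_item(theta_students, loading, intercept)
y_teachers = simulate_item(theta_teachers, loading, intercept)

def fit_line(theta, y):
    """Recover the slope (factor loading) and intercept by least squares."""
    slope, icept = np.polyfit(theta, y, 1)
    return slope, icept

slope_s, icept_s = fit_line(theta_students, y_students)
slope_t, icept_t = fit_line(theta_teachers, y_teachers)

# When the recovered item parameters agree across groups, the difference in
# mean item scores reflects a difference in the construct, not in the items.
print(f"students: loading={slope_s:.2f}, intercept={icept_s:.2f}")
print(f"teachers: loading={slope_t:.2f}, intercept={icept_t:.2f}")
```

    If instead the simulation assigned the teacher group a different loading or intercept, the fitted lines would diverge, and a raw score comparison would confound item behavior with the construct, which is exactly what the invariance tests guard against.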

    The results of the successive invariance tests suggested that the relationship of the items to the measured knowledge was at least partially equivalent between GeT students’ pre- and post-tests, as well as between GeT students and practicing teachers. Here, partial equivalence means that we were able to establish equivalence between the groups and time points after allowing unequal item parameters (item factor loadings or item intercepts) for 9 of the 17 items. As we were able to establish comparable scales, we proceeded to calculate the GeT students’ MKT-G growth and compare it to the practicing teachers’ MKT-G.

    The comparison of the scores suggested that, on average, GeT students scored about 0.25 SD units higher on the MKT-G test after completing the GeT course, but their scores were still 1.04 SD units below those of practicing teachers who took the same test. This result suggests a positive association between college geometry courses designed for future teachers and mathematical knowledge for teaching geometry, in terms of the growth in the knowledge of the students who took the courses. Additionally, examining this association contributes to research methodology by showing how to establish comparable scales of knowledge gains between two different teacher populations (e.g., pre-service teachers and in-service teachers).
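    For readers who want the arithmetic spelled out, the effect sizes above are standardized mean differences. The summary statistics in the sketch below are made up purely for illustration; they are chosen only so the resulting values match the reported magnitudes.

```python
def standardized_diff(mean_a, mean_b, sd):
    """Difference between two means expressed in SD units of a reference scale."""
    return (mean_a - mean_b) / sd

# Hypothetical summary statistics (made up for illustration only).
sd_ref = 8.0           # SD of the reference score distribution
pre_mean = 40.0        # GeT students' hypothetical mean at pre-test
post_mean = 42.0       # GeT students' hypothetical mean at post-test
teacher_mean = 50.32   # practicing teachers' hypothetical mean

gain = standardized_diff(post_mean, pre_mean, sd_ref)     # (42 - 40) / 8 = 0.25
gap = standardized_diff(teacher_mean, post_mean, sd_ref)  # (50.32 - 42) / 8 ≈ 1.04
print(gain, gap)
```

    Expressing both the gain and the remaining gap on the same reference SD is what makes the two numbers directly comparable, which is only defensible once measurement invariance has been established.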

    Reference

    Brown, T. A. (2006). Confirmatory factor analysis for applied research. Guilford.

  • A Contribution to Stewarding the SLOs: Developing SLO Assessment Items and Examining Item Responses

    Over the past year, the Teaching GeT working group proposed that one way to contribute to reducing the variability in outcomes in the preparation of secondary geometry teachers would be to formulate and steward a set of ten student learning objectives (SLOs) that could be utilized by instructors of GeT courses. We recognize that the SLOs themselves are a work in progress and that at any one time we are dealing with a version of them. Precisely because of the open-text nature of the SLOs, it is important to identify the many sources of warrants that we could rely on in order to use the SLOs to build more specific curriculum or instruction, as well as improve the SLOs themselves. Important sources for the development of the SLOs have included: the mathematical domain of geometry and its history, instructors’ experiences teaching geometry courses and what they have seen their students do in those courses, policy documents for the teaching of geometry in K-12 and college, mathematics education scholarship, and instructors’ knowledge of research and practice in the teaching and learning of geometry at the secondary level. Those sources have supported lively discussions about what to include and how to prioritize possible inclusions. We at the GRIP thought that gathering students’ work on items that elicited knowledge of the SLOs could provide another kind of warrant to support discussions about the SLOs.

    Based on the SLOs v.0 produced by the Teaching GeT group, members of the GRIP Lab at the University of Michigan developed a set of open-ended assessment items that tap into GeT students’ attainment of the SLOs. The intention was to have each item elicit the knowledge named in one of the SLOs, though it was apparent that item responses might also provide evidence of knowledge of other SLOs. Following the genre of other MKT assessments (e.g., Ball et al., 2008; Herbst & Kosko, 2014; Hill et al., 2004), each item describes an event happening in a high school geometry classroom in which the teacher needed to make a decision that required the knowledge named in that SLO. For example, one item, designed to measure SLO 1 (Proofs), asked the participant to consider the following:

    Unlike in the usual MKT-G items, the respondents did not receive a set of alternatives to choose from but were asked to compose an open-ended response and enter it in a text field. 

    The process through which the current set of items was created was loosely based on a set of recommended guidelines for developing measurement scales specified by DeVellis (2014, pp. 105-152). In particular, as the constructs (SLOs) were already defined, the majority of the work involved scoping several items for each SLO, choosing which of those scopes to turn into actual items, writing those items, and putting them through rounds of revision. The vetting of initial drafts of the items included considering whether the teaching scenario described in a given item (the student work, the decision the teacher had to make, etc.) seemed realistic and whether the item seemed likely to elicit a response that would be mainly driven by the participant’s knowledge named in a given SLO. In the end, two items for each SLO were chosen to be administered.

    These items are a first, rapid prototype of what a summative assessment might look like, created to gather data to support our collective work on the SLOs. That is, we do not yet know enough about the items to use them for consequential tasks such as appraising an individual’s attainment of a specific SLO, an individual’s attainment of the SLOs in their totality, or a class’s average attainment of the SLOs as a proxy for the quality of the attained curriculum. The items target geometry knowledge by posing problems contextualized in tasks of teaching and make minimal assumptions about respondents’ knowledge of mathematics schooling; however, they are not intended to assess knowledge of pedagogy.

    While not ready to be used in any formal assessment of students or evaluation of courses, the items support the process of stewarding the SLOs by prototyping what kind of items might be needed for our whole community to document our progress in student SLO attainment. So far, we have collected student responses from seven GeT courses from the Winter 2021 term. The responses we have collected can provide an empirical basis for our community to discuss and improve the SLOs; for example, the contents students might bring up in the item responses can resonate or not with the expectations we may have had about what it would mean to attain an SLO.

    In order to engage the community in that conversation, we proposed a workshop where current and prospective members of GeT: A Pencil could come and review items and students’ responses to those items. Rather than working intensively over a few days, as at a traditional conference workshop, and to make the workshop easier to attend, participants were asked to commit a couple of hours every other week over the summer and early fall term. For each item, they would discuss what the item seemed to assess in light of the responses and the SLOs. Participants were given access to more responses in a Canvas forum in which they continued to discuss the items. Finally, during the week of October 4th, participants had the opportunity to discuss the assessment more holistically.

    In this volume and future iterations of GeT: The News!, we will provide articles that take a deeper dive into the items themselves. In these articles, we will provide an item and its intended SLO, our analysis a priori of the item, and what we heard from the instructors regarding the items, as well as how the students responded to the items in a categorized form. As we have learned from these workshops, there is much to be gained not only from the correct responses but from the incorrect or partially correct ones as well—which we will show through these writings.

    References

    Ball, D. L., & Cohen, D. K. (1999). Developing practice, developing practitioners: Toward a practice-based theory of professional education. In Teaching as the learning profession: Handbook of policy and practice (pp. 3–22).

    Ball, D. L., Thames, M. H., & Phelps, G. (2008). Content knowledge for teaching: What makes it special? Journal of Teacher Education, 59(5), 389–407.

    DeVellis, R. (2014). Scale development: Theory and applications. Sage.

    Herbst, P., & Kosko, K. (2014). Mathematical knowledge for teaching and its specificity to high school geometry instruction. In J.-J. Lo, K. R. Leatham, & L. R. Van Zoest (Eds.), Research trends in mathematics teacher education (pp. 23–45). Springer International Publishing.

    Hill, H. C., Schilling, S. G., & Ball, D. L. (2004). Developing measures of teachers’ mathematics knowledge for teaching. The Elementary School Journal, 105(1), 11–30.

  • A Deeper Dive into an SLO Item: Examining Students’ Ways of Reasoning about Relationships between Euclidean and Non-Euclidean Geometries

    The assessment items we developed last spring can be a resource in our collective work stewarding the Student Learning Objectives (SLOs). One way to operationalize that possibility could be to revise the items with the goal of incorporating them into a test, like the MKT-G. In such a scenario, we could envision that at some point the items could serve to provide instructors with information about how well their students attained each of the SLOs or all of them as a set. However, there is another possible use, which seems to us more compelling for the time being, and it hinges on a different interpretation of assessment: assessments that have formative, rather than summative, purposes. 

    Assessments like the MKT-G are often described as summative assessments; the information they provide feeds a single, synthetic score per student that can be interpreted in terms of how much knowledge the student gained over the course of the semester. As Schoenfeld (2015) says,

    Summative assessments are examinations or performance opportunities the primary purpose of which is to assign students a score on the basis of their knowledge (p. 184).

    In that sense, gains in the MKT-G test provide information comparable to course grades. However, educators also use assessment items for formative purposes. Schoenfeld (2015) writes that,

    Formative assessments are examinations or performance opportunities the primary purpose of which is to provide student and teachers feedback about the student’s current state, while there are still opportunities for student improvement (p. 184).

    We want to examine the possibility of using the SLO Assessment items for formative purposes, namely, to provide evidence to instructors and students about students’ understanding, with the goal of informing decisions the instructor might make during a class, such as what to do in the following class meeting. 

    Toward that end, the materials we have consulted and the discussions we have had during the summer of 2021 in the context of our assessment workshop can be quite valuable. They can help us consider what resources instructors would need in order to implement these items for formative assessment purposes. We begin with one such consideration, taking item 15903 as an example. This item, which we transcribe below, was originally written to give students an opportunity to show evidence of attainment of SLO 9.


    The student learning objective (SLO 9) about other geometries states:

    SLO 9 Compare Euclidean geometry to other geometries such as hyperbolic or spherical geometry.

    We took the examples provided in SLO 9 as suggesting a distinction between Euclidean and non-Euclidean, possibly appealing to the historical efforts to prove the parallel postulate. We assumed that if a geometry course would aim for students’ attainment of SLO 9, the class would likely have a discussion of the parallel postulate, its negation, and, possibly, Euclidean models of the different geometries that would ensue. 

    With that in mind, item 15903 reads:

    Consistent with the genre of MKT items, the SLO items were all phrased in terms of a high school teacher’s work, so as to evoke the notion that mathematical knowledge for teaching is the knowledge a teacher needs to do their job. In this case, the notion that students could ask a question that involved a comparison between geometries and that the teacher would need to answer such a question was meant to simulate, for the prospective teachers taking the test, the work that they would need to do later on if and when they became teachers. The text of the item, particularly “some students started wondering whether everything that is true in Euclidean geometry will turn out to be false in non-Euclidean geometries,” suggests that some contrasting truths might have been presented to the class. For example, the notion that, in Euclidean geometry, there is one and only one line parallel to a given line through a point not on that line might have been contrasted with how, in spherical geometry, there might not be any parallel line, or in hyperbolic geometry, there might be more than one such parallel line. Furthermore, for students to even think about the possibility that the different geometries would contradict each other everywhere, we thought it was likely that Mr. Thompson had discussed some contradictory facts across geometries. One fact that is often presented in history of mathematics textbooks is what the different geometries state for the sum of the angles in a triangle. In Euclidean geometry the triangle angle sum is a constant (180 degrees); in hyperbolic geometry it is not a constant, but it is always less than 180 degrees; and in elliptic geometry it is not a constant either, but it is always more than 180 degrees. The context of a high school class afforded bringing in the thinking of adolescents and their penchant for playing with logical inferences by taking things to extremes.
In writing the item, we thought it quite possible that after seeing two statements that said contradictory things about the same objects across different geometries, Mr. Thompson’s students might consider it reasonable to pose the question they posed to him. 
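    The angle-sum contrast above can be stated compactly. For a triangle with angles α, β, γ on a surface of constant curvature K (with K = −1 in the hyperbolic case, K = 0 in the Euclidean case, and K = +1 in the spherical case), the Gauss–Bonnet theorem gives

```latex
\alpha + \beta + \gamma \;=\; \pi + K \cdot \operatorname{Area}(\triangle)
```

    so the sum is exactly π in Euclidean geometry, falls short of π by the triangle’s area in hyperbolic geometry, and exceeds π by the triangle’s area in spherical geometry. This is why, in the non-Euclidean cases, the sum depends on the triangle chosen rather than being a constant.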

    Analysis a priori of item 15903

    One immediate thing to notice in item 15903 is that the question the students asked Mr. Thompson implies a false generalization: even though some things that are true in Euclidean geometry are not true in non-Euclidean geometry, it is not the case that everything true in Euclidean geometry is false in other geometries. To show that the generalization is false, Mr. Thompson would need a counterexample; here, a counterexample would be a statement that is true in both Euclidean and non-Euclidean geometries. We thought item 15903 would address SLO 9 because if students had been exposed to the difference between Euclidean and non-Euclidean geometries, they would have seen the role the parallel postulate played in the emergence of non-Euclidean geometries. Students who had the opportunity to learn about non-Euclidean geometries might get to think of properties that rely neither on the parallel postulate nor on its alternatives to warrant their truth. The SAS criterion for triangle congruence, for example, is true in absolute geometry (geometry that satisfies Hilbert’s postulates except for the parallel postulate). Also, propositions that assert properties of incidence, separation, and betweenness would hold across geometries.
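    The logical structure at stake can be made explicit (the notation below is ours, introduced purely for illustration): the students’ question asserts a universal claim, and refuting a universal claim requires exhibiting one witness against it.

```latex
% Students' (false) generalization: everything true in Euclidean
% geometry is false in non-Euclidean geometry.
\forall p \,\bigl(\mathrm{True}_{E}(p) \Rightarrow \neg\,\mathrm{True}_{NE}(p)\bigr)

% A counterexample: a single statement that is true in both.
\exists p \,\bigl(\mathrm{True}_{E}(p) \wedge \mathrm{True}_{NE}(p)\bigr)
```

    This is why, somewhat counterintuitively for students, falsifying the claim calls for producing something true rather than something false.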

    When we shared the item with members of GeT: A Pencil, here is what we heard:

    • One participant noted: “In the prompt, the student is wondering if everything that is true in Euclidean geometry will turn out to be false in non-Euclidean geometry, but the question is asking for an example of a response the teacher could give that would offer a theorem that is true in Euclidean geometry and is also true in non-Euclidean geometry.” The participant went on to wonder whether this would have an effect on the student responses, as the student is being asked to falsify a claim by giving an example of something that is true.
    • A second participant noted that students will need to understand what makes Euclidean geometry Euclidean and be familiar with examples of non-Euclidean geometries.
    • Another participant noted that students would need to have a working knowledge of at least a few examples of non-Euclidean geometry, including examples of at least one theorem that is true in both Euclidean and non-Euclidean geometries. 
    • An additional participant noted that students should know what is meant by a “theorem”: “Specifically, they would have to give some thought to the following question: ‘is an axiom of a theory also one of its theorems?’ (From the perspective of formal logic, the answer to this question is ‘Yes’, but from the perspective of standard usage the answer is ‘No’.)”
    • During our in-person discussion, the conversation continued about whether there is a need in the GeT course to distinguish between axioms and theorems. For example, is an axiom a theorem? Some GeT instructors shared that they make this distinction explicit for their students, while others noted that they do not. Some instructors were concerned that their students might feel they would not be allowed to state an axiom.
    • Another point of concern was whether or not the item needed to ask for a “theorem” that is true in Euclidean and non-Euclidean geometry. One participant suggested that instead of asking “What example could Mr. Thompson offer of a theorem that is true in Euclidean geometry and in a non-Euclidean geometry,” we could instead ask, “What example could Mr. Thompson offer of something that is true in Euclidean geometry and in a non-Euclidean geometry?” This participant claimed that this version would target the same knowledge of the SLO while still probing whether students take non-Euclidean to mean the negation of Euclidean.

    The first two bullets seem to suggest the item has face validity as part of an assessment of whether students have attained SLO 9. The third, fourth, and fifth bullets, however, suggest sources of noise in the item. The notion of a theorem, for example, brings some noise. On the one hand, the fourth bullet contrasts theorems with axioms, noting that logically they are similar—indeed, both axioms and theorems are declarative statements, but they are different in usage. For the question, it is important that students think of propositions that are proved within a theory making use of axioms (i.e., theorems) and not propositions that are postulated as true to build the theory (i.e., axioms). Otherwise, not only would the question be too easy (e.g., students could bring up the axiom that two points determine a line as an example that some statements are true across Euclidean and non-Euclidean geometries), but it would also fail to tap into the sense of logical necessity flowing from axioms to theorems called up by the question (i.e., a set of axioms defines a geometry by necessitating the truth of a set of theorems, but this does not mean that all the axioms are needed to prove all the theorems; so if two theories have an overlapping set of axioms, as Euclidean and non-Euclidean geometries do, it is quite possible that some theorems would be true in both). To address the issue raised in the penultimate bullet, the question was indeed meant to signal that providing an axiom would not be an answer: what the students had said to Mr. Thompson, that such things would “turn out to be false,” pointed to the truth value of the proposition at stake being dependent on something else rather than postulated by choice. On the other hand, the meaning of the word theorem is not merely that of a declarative proposition that has been proved.
    Theorems are special propositions, deserving of recognition; for example, they conclude an investigation or present a frequently used result. Along those lines, there are status differences among declarative propositions that can be proved—theorems, lemmas, propositions, observations, and corollaries may be logically created equal but are not mathematically equal. Indeed, some of them are provable but not proved because their proof is trivial. Even Euclid distinguished the statements he proved, separating theorems from scholia. However, theorems do not only have special status; they also often have names or shorthands that refer to them. This is especially relevant here because many theorems of Euclidean geometry do not have names or shorthands that make them memorable; students recognize the Pythagorean theorem and maybe the base angles and exterior angle theorems, but many of the theorems that would answer the question may not have enough of those accoutrements to be remembered as theorems. Furthermore, the properties of incidence, collinearity, and separation that could answer the question were not historically theorems for Euclid but rather assumptions that later geometers made explicit. Pasch’s Theorem, for example, says that if three points are not on a line and a line passes through the segment determined by two of them, the line will also pass through one of the two other segments determined by the three points. Within Hilbert’s axioms for Euclidean geometry, Pasch’s Theorem is true across geometries; yet when Pasch proposed it, he did so as a way to show the gaps in Euclid’s axioms—as the theorem cannot really be proven from Euclid’s original axioms. Could students have brought up Pasch’s theorem as an example that Mr. Thompson could use? Maybe, but it is unlikely.
    The example of Pasch’s Theorem suggests that, beyond the status differential among declarative propositions, there are historical developments in the distinction between Euclidean and non-Euclidean geometry that could get in the way of students identifying a theorem that is true across both. Indeed, the status and historical confounds of the word theorem complicate the question considerably; students who knew different geometries but did not have an example handy might be confused as to what would count as an example. Students would be likely to answer the question correctly on a test if an example had been covered in their class, but less likely if they had to come up with an example on their own.

    The last bullet is particularly interesting in regard to the kind of assessment one is doing. The word “something” could be problematic in the context of a test or a written, summative assessment: students might not necessarily think of declarative statements as the “somethings” to look for. The word could also sound so informal and opaque that students who provided non-answers (e.g., “sphere in both”) might have grounds to complain in a summative assessment setting. As a result, evaluating responses might be tricky. However, in the context of a formative assessment, done in class, there might be an opportunity for the instructor to start by asking for “something” and then cue students to think of declarative statements that can be proved across different geometries as the target.

    Analysis of student responses to the items

    When we collected responses during Spring 2021, we found that, of the 42 student responses, 31 were non-trivial—that is, they provided some evidence of effort or of knowledge of how to solve the problem. The student responses show a variety of ways in which students might relate to the question and to the distinction between Euclidean geometry and other geometries. After looking through the responses, we classified them in the following way:

    Category 1: little to no evidence that the student was exposed to the knowledge of SLO 9 and some evidence that the student was swayed by an interpretation of the word “theorem,” which was more specific than just a provable declarative statement. Five students responded to this question by naming the Pythagorean Theorem (a “common” theorem). 

    Most of these responses simply wrote “the pythagorean theorem” or something very similar. One response (A6) noted, “the Pythagorean Theorem and resultant distant [sic] formula hold in both Euclidean and non-Euclidean geometry.” From these responses, we are unsure whether the students had any exposure to the knowledge called forth by SLO 9 or whether they were just naming a theorem that they knew; however, they did read the question carefully and provided a theorem, something that was not necessarily the case in other responses.

    Category 2: some evidence that the student was exposed to the knowledge called forth by SLO 9. Three students showed knowledge of triangle angle sum properties in different geometries, which we take as providing some evidence that the student was exposed to the knowledge called forth by SLO 9.

    In these responses, there were references to the angle sum properties of triangles. How triangle angle sums differ across various non-Euclidean geometries is one of the basic facts covered in learning about those geometries. However, the knowledge the students recalled was generally incorrect. For example, responses A7 and A11 said something similar to “triangles sum to 180,” which is not true in the historical examples of non-Euclidean geometries. However, students’ responses were not limited to those geometries. Statement A51 (“A triangle’s angles still add up to 180 degrees in taxicab geometry.”) provided facts about triangles in taxicab geometry; this student attempted to give an example of a fact that is true in both Euclidean and a non-Euclidean geometry, but the fact is not true in the formulation of taxicab geometry where the sum of the angles is 4 t-radians.

    Category 3: little to some evidence that the student was exposed to the knowledge of SLO 9. Students named objects or properties of mathematical objects without any mention of explicit geometries.

    Twelve responses were classified in this category. In these responses, students either named a mathematical object or properties of mathematical object(s) without explicitly naming a non-Euclidean geometry. Some examples of responses that name mathematical objects are A12 (“parallel lines”), A16 (“a straight line”), A19 (“sphere in both”), A26 (“hyperbolic shapes”), and A36 (“Parallel lines exist?…”). Some theorems can be proven about these objects, so it is possible that the students were remembering isolated bits of the knowledge associated with SLO 9, but these students provided neither a theorem nor a non-Euclidean geometry. One response, A35, states, “the fifth postulate is still true,” which is incorrect, as the fifth postulate holds only in Euclidean geometry. The rest of the responses, A17 (“Theorem 1.2 that states two lines have at most one point in common.”), A23 (“A straight line segment can be drawn joining any two points.”), A28 (“the area”), A29 (“The angle between perpendicular lines remains 90 degrees in non-euclidean geometry.”), A39 (“the definition of a circle”), and A55 (“Def of line”), deal with properties or definitions of mathematical objects, yet do not name a non-Euclidean geometry.

    Category 4: ample evidence that the student was exposed to the knowledge associated with SLO 9. These were (mostly) correct responses.

    Ten responses were classified in this category. They include students who correctly provided a theorem that holds in both Euclidean geometry and a non-Euclidean geometry. Additionally, these students were explicit about which non-Euclidean geometry the theorem holds in. These responses include references to the intersections of lines in Euclidean and hyperbolic geometries (A22, A25, A32, A38). Two responses looked at properties of parallel lines across Euclidean and an explicit non-Euclidean geometry (A24, A49). Statements A15 (“The SAS theorem is used in both types of geometry.”) and A20 (“Proving congruency can work in both.”) about triangle congruence are on the right track. However, A20 could have been more specific about a theorem and a particular non-Euclidean geometry, and A15 needed to be clear about which non-Euclidean geometries the SAS theorem holds in (e.g., SAS congruence is true in hyperbolic and spherical geometry but not in taxicab geometry). Statements A13 (“area of a rectangle in taxicab geometry”) and A57 (“taxicab equilateral triangles are not always equiangular.”) are examples of correct statements that would need to be rewritten to answer the question correctly. Lastly, one response (A56) was almost correct: the student noted, “I would explore any theorem in hyperbolic geometry that doesn’t require the parallel axiom,” but was not explicit about which theorem to choose.

    Instructor interpretations of the student responses

    When members of GeT: A Pencil saw the student responses, we heard the following reactions/interpretations in the forum.

    • Forum participants thought it was remarkable that a large number of responses (11) named postulates or definitions.
    • A24 (“In Euclidean and Hyperbolic, two lines perpendicular to the same lines are parallel.”) and A51 (“A triangle’s angles still add up to 180 degrees in taxicab geometry.”) name common theorems in hyperbolic and taxicab geometries, respectively.
    • All participants agreed that A24 was the strongest response: “in Euclidean and hyperbolic, two lines perpendicular to the same lines are parallel.”
    • One participant noted that A56 (“I would explore any theorem in hyperbolic geometry that doesn’t require the parallel axiom.”) is correct, while not providing a concrete theorem.
    • One participant noted that A17 (“Theorem 1.2 that states two lines have at most one point in common.”), A22 (“An example Mr. Thompson can give is that in both Euclidean Geometry and hyperbolic Geometry the intersection of two lines is at most one point.”), A32 (“In both Euclidean and hyperbolic geometry there is at most one point at the intersection of two lines.”), and A38 (“There is at most one point for the intersection of two lines in Euclidean and hyperbolic geometry.”) make the same claim, which is true in neutral geometry and is definitely a theorem (in the sense of “not an axiom”). A25 (“Lines that intersect in Euclidean geometry intersect [at] exactly one point. This can also occur in hyperbolic geometry.”) makes the same claim but is marred by the phrase “can also occur;” it should be “is also true.”
    • Instructors noticed students naming the Pythagorean Theorem, but noted that this is not a theorem that is true in neutral geometry. 

    Way Forward

    As we move forward with this work, we want to hear your feedback and thoughts on what we have written here. We have heard from the participants in the workshop that these items or modifications of these items could serve as formative assessment tasks in the GeT courses. The GRIP team can serve as support, providing resources for the teaching of lessons using these tasks. Additionally, GeT instructors could work collaboratively, providing their students with the same tasks, and then come together to reflect and learn from each other on how the tasks helped elicit knowledge of the SLOs from their students.

    We think that the a priori analysis as well as the categories of student responses may help instructors use these items for formative purposes. We would love to know what you think about that and whether there are other resources we could provide to support instructors as they use the items with their students. Additionally, if instructors who use items like 15903 collected their students’ work, scans of de-identified student work uploaded to our Canvas site could help our community continue learning about possible student responses.

    References

    Schoenfeld, A. H. (2015). Summative and formative assessments in mathematics supporting the goals of the common core standards. Theory Into Practice, 54(3), 183–194.

  • Reporting on the MKT-G Results from GeT Students

    In this article, I share what the GRIP Lab has learned by collecting responses from Geometry for Teachers (GeT) students who have taken our mathematical knowledge for teaching geometry (MKT-G) assessment before and after taking the GeT course.

    MKT-G instrument

    Herbst and Kosko (2014) developed an instrument to measure MKT-G that follows the definitions of content knowledge for teaching from Ball, Thames, and Phelps (2008). We used that instrument to estimate preservice teachers’ MKT-G using a unidimensional item response theory (IRT) model.

    To understand the participating GeT students’ MKT-G growth in relation to inservice teachers’ MKT-G, GeT students’ MKT-G scores were estimated using a distribution of in-service teachers’ MKT-G scores. Specifically, GeT students’ item responses were aggregated with the responses to the same 21 stem items by 605 in-service teachers so that GeT students’ MKT-G standing relative to the in-service teachers could be examined.
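    The idea of calibrating both groups’ responses on one scale can be illustrated with a simplified sketch. The actual estimation used a unidimensional IRT model on the real response data; the code below substitutes the simpler Rasch (1PL) model, a joint maximum-likelihood fit by gradient ascent, and synthetic responses, so every number and parameter here is hypothetical:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic responses standing in for the real data: 222 "students" and
    # 605 "in-service teachers" answering the same 21 dichotomous items.
    n_students, n_teachers, n_items = 222, 605, 21
    theta_true = np.concatenate([
        rng.normal(-1.0, 1.0, n_students),  # students assumed lower on average
        rng.normal(0.0, 1.0, n_teachers),
    ])
    b_true = rng.normal(0.0, 1.0, n_items)  # item difficulties

    # Rasch model: P(correct) = logistic(theta_person - b_item)
    prob = 1.0 / (1.0 + np.exp(-(theta_true[:, None] - b_true[None, :])))
    responses = (rng.random(prob.shape) < prob).astype(float)

    def fit_rasch(X, n_iter=1000, lr=0.5):
        """Joint maximum-likelihood Rasch fit by gradient ascent (illustration only)."""
        n_persons, n_items = X.shape
        theta = np.zeros(n_persons)
        b = np.zeros(n_items)
        for _ in range(n_iter):
            P = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
            resid = X - P                    # drives the log-likelihood gradient
            theta += lr * resid.mean(axis=1)
            b -= lr * resid.mean(axis=0)
            b -= b.mean()                    # pin the scale's origin to the items
        return theta, b

    # Calibrating both groups together puts everyone on one common scale,
    # so student abilities can be read relative to the teachers'.
    theta_hat, b_hat = fit_rasch(responses)
    student_scores = theta_hat[:n_students]
    teacher_scores = theta_hat[n_students:]
    ```

    In practice one would use an established IRT package rather than this hand-rolled fit; the point is only that pooling the two groups’ response matrices before calibration is what makes the resulting scores directly comparable.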

    Research Questions

    1. What is the growth in MKT-G scores that happens during a GeT course?
    2. How do GeT students compare in MKT-G to a national sample of inservice teachers?
    3. Are there differences in the growth of MKT-G scores between students who seek teaching certification and other students also taking the GeT course?

    Data

    This analysis considers the responses from 222 students taking 15 GeT courses taught by 13 GeT instructors in the 2018/2019 academic year. Of these 222 students, 123 (55.4%) of them were preparing for teacher certification.

    Method

    We estimate the growth in MKT-G scores using a linear regression model:

    Y_it = β0 + β1 Post_t + ε_it

    where Y_it is student i’s MKT-G IRT score at time t, β0 is the average MKT-G IRT score at the beginning of the semester, and β1 is the estimated growth in MKT-G IRT scores at the end of the semester. This regression model is equivalent to a paired t-test that compares the average MKT-G IRT scores before and after the Geometry for Teachers course. Using a regression allows us to adjust the growth estimate for students’ covariates. We adjust our estimates for students’ programs and majors as well as students’ demographic characteristics.
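    The equivalence between this regression and a paired t-test can be checked numerically. Below is a minimal sketch on synthetic data (all scores and the 0.16 growth figure are invented for illustration; the real analysis used students’ actual IRT scores and covariates):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic pre/post scores standing in for the real MKT-G IRT estimates.
    n = 222
    pre = rng.normal(0.0, 1.0, n)
    post = pre + 0.16 + rng.normal(0.0, 0.3, n)  # assumed ~0.16 SD average growth

    # Stack both waves and regress the scores on a post-test indicator.
    y = np.concatenate([pre, post])
    is_post = np.concatenate([np.zeros(n), np.ones(n)])
    X = np.column_stack([np.ones(2 * n), is_post])
    (beta0, beta1), *_ = np.linalg.lstsq(X, y, rcond=None)

    # beta0 recovers the pre-test mean, and beta1 equals the mean of the
    # within-student differences -- the same point estimate a paired t-test uses.
    ```

    Adding covariate columns to X (program, major, demographic indicators) is what lets the regression adjust the growth estimate, which a plain paired t-test cannot do.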

    Results/Discussion

    Three main results emerge: (1) On average, students score about 0.161 standard deviation units higher on the MKT-G test after completing the Geometry for Teachers course, after controlling for student programs and majors and their demographic characteristics. (2) On average, students taking the MKT-G test score about one standard deviation below the in-service teachers (with an average of 14.2 years of mathematics teaching) who took the same test. (3) Students who plan to be mathematics teachers have higher gains in MKT-G than other students, on average (0.234 standard deviation growth compared to 0.09).

    These results highlight some main conclusions about the Geometry for Teachers course. First, teachers develop knowledge about geometry while they teach: the difference between the students in our sample and the average in-service teacher is about the expected growth in teacher knowledge that happens after teaching geometry for five years (see Desimone, Hochberg, & McMaken, 2016). Second, taking a specialized GeT course appears to close this gap in knowledge by about one year. As we move forward with this work, we gain a deeper understanding of the importance and value of the GeT course for preservice teachers.
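    The “about one year” figure follows from back-of-the-envelope arithmetic, under the stated assumption that roughly five years of teaching geometry corresponds to the one-standard-deviation gap:

    ```python
    # Back-of-the-envelope check of the "about one year" claim.
    gap_sd = 1.0            # students trail in-service teachers by about 1 SD
    years_to_close = 5.0    # assumed years of teaching needed to close that gap
    course_gain_sd = 0.161  # average growth over one GeT course

    growth_per_year = gap_sd / years_to_close           # about 0.2 SD per year
    years_equivalent = course_gain_sd / growth_per_year
    # years_equivalent comes out to about 0.8, i.e., roughly one year of teaching
    ```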

    Mike Ion is a Research Assistant in the GRIP Lab.

    References

    Ball, D., Thames, M. H., & Phelps, G. (2008). Content knowledge for teaching: What makes it special? Journal of Teacher Education, 59(5), 389–407.
    Desimone, L., Hochberg, E. D., & McMaken, J. (2016). Teacher knowledge and instructional quality of beginning teachers: Growth and linkages. Teachers College Record, 118(5), 1–54.
    Herbst, P., & Kosko, K. (2014). Mathematical knowledge for teaching and its specificity to high school geometry instruction. In J.-J. Lo, K. R. Leatham, and L. R. Van Zoest (Eds.), Research trends in mathematics teacher education (pp. 23–45). Cham: Springer.