By John Myers

Assessment Process

In Part 1 my students (teacher candidates) identified issues in assessment based on their experiences as students and teacher candidates. Part 2 presents some of their findings. How well do you, as experienced teachers, recognize the challenges we identify in improving assessment and evaluation practices? Distilling nearly 1,000 pages of their research was easier than grading it (already done), as certain ideas were common to several of the questions investigated.

What in the following is familiar to you? What issues have you resolved to your satisfaction and that of your students? What seems novel and worth a try?

 

Our Findings

Those who looked at EQAO testing, including some with experience as markers, wondered how much input classroom teachers had into the items, format, and marking scheme. This concern can apply to all standardized tests and other forms of large-scale assessment.

For many of the research topics noted in Part 1, challenges and solutions overlapped. So here are some common themes their work revealed, regardless of the research question:

  • frequent assessments with a cycle of feedback and student involvement through self- or peer assessment (teaching students how to do these is vital)
  • working with students from the outset of a major assignment to revise or create a rubric, building student voice and student trust
  • for some of the issues we worked with, such as grading, there is little research at the K-12 level, and even the university studies raised questions as they offered approaches to make grading fair
  • teachers working collaboratively in moderated marking sessions significantly improved the reliability of scoring
  • issues of validity are still present, especially since we are expected to assess and evaluate knowledge, skills, conceptual understanding, critical and creative thinking, dispositions, habits of mind and learning skills (See my Pedagogical Perspective on the nature of quality in a previous Rapport blog.)
  • clarity of goals, modeling through exemplars, and identifying success criteria promote achievement
  • the challenges of linking learning goals to assessment methods: gaps among the official curriculum, the taught curriculum, and the tested curriculum were seen*
  • the importance of motivation and of developing trust** so students recognize the value of learning rather than just getting the grade, and
  • if we do all the “good things” noted above, are grades being inflated or do the higher scores simply represent good teaching?
  • Growing Success needs an upgrade based on what we found and what we wished we had found. The upgrade would also consider the implementation challenges that still exist nearly a decade after publication.

The challenges of differentiation were explored, with the consensus (matching Tomlinson & McTighe, 2006) that if the learning goals are clear, students can have options for demonstrating what they know and can do. Such differentiation challenges our workload and our design of appropriate assessment tools, such as rubrics, to ensure that what is evaluated matches the key learning goals and minimizes such secondary criteria as "artistic ability".

The work on visible thinking from Harvard's Project Zero was consistently the most popular assignment in my courses, and its principles were often featured as examples of self- and peer assessment practices as well as a means of providing evidence of learning (Ritchhart et al., 2011). The following offers an introduction to this work: http://www.pz.harvard.edu/resources/thinking-routines-video. Some of what we did will be featured in my upcoming workshop at OHASSTA in November.

Grading was a major issue we explored. For example, while the provincial Achievement Charts were seen as a way to promote assessment beyond declarative knowledge (i.e., facts), they were also seen as a straitjacket when applied to assign a fixed percentage in a graded assignment. In a study we did in the first year of this course, 49 of the teacher candidates saw overlap and potential confusion among the categories; for example, when you "explain" something, where does knowledge end and communication begin? Thinking can only be demonstrated when it is communicated in some way. Does knowledge of a historical event or naming a psychological construct mean a student understands? Only 3 candidates saw a "possibility" of clear distinctions among the categories.

Does grading enhance learning, or is it a "technology" that limits student growth (Hoben, 2013)? A number of colleagues at OISE struggle with assessment because it is associated with grades. For example, I applied some research on test anxiety***, which resulted in both improved scores and an absence of panic during the end-of-course quiz (designed to test knowledge and principle acquisition). Because the students demonstrated improved learning on this measure (the primary purpose of assessment in Growing Success), I got called on my high grades.

As Hoben states, "grades, we should remember, are only floating markers on an ocean that is vaster and more powerful than we can ever imagine" (p. 182). Do our scale levels, percentages, etc. truly represent levels of difference in learning? Does our grading system align with our purposes for assessing student learning (Myers, 2004)?****

 

If I Could Reshape The Course?

Like the teaching strategies course Models of Teaching, taught in the first decade of the 21st century, I would want this course taught in year two so that teacher candidates could work with their ATs in practicum to try things out and see if they stick. We would also further explore the use of visible thinking routines in collaboration with Harvard's Project Zero to see how teachers follow up on what they learned from any routine. At various workshops I have offered an example from decades ago based on a routine called "reaction wheel". I shall share this at the OHASSTA conference.

If you have ideas for revisions and improvements, please offer them.

* An account of the dilemmas we face when working with the "official", "taught", "tested", "hidden", and "learned" curricula can be found in Myers, J. (2003). Curriculi, curricula: The many faces of curriculum. Education Today (Summer).

** Among my favourite writers is Dan Pink, whose "pinkcasts" are clear and brief looks at issues of motivation for people of all ages. One that relates to teaching and learning is found at https://www.danpink.com/pinkcast. Check out Pinkcast 2.16, offering better feedback in just 19 words (78 seconds + comments).

*** Test anxiety causes and remedies abound, if Google is any judge. After recent conversations with teachers of undergraduate courses at universities and community colleges, one wonders how much is done at the high school level to teach students how to write tests and exams.

**** For a recent look at grading issues, see O'Connor, K., Jung, L. A., and Reeves, D. (2018). Gearing up for FAST grading and reporting. Phi Delta Kappan (May).

 

Additional References

Hoben, J. (2013). Outside the wounding machine: Grading and the motive for metaphor. Counterpoints, 451, 169-183.

Myers, J. (2004). Assessment and evaluation in social studies: A question of balance. In A. Sears and I. Wright (Eds.), Challenges & Prospects for Canadian Social Studies. Vancouver, BC: Pacific Educational Press.

Ritchhart, R., Church, M., and Morrison, K. (2011). Making Thinking Visible: How to Promote Engagement, Understanding, and Independence for All Learners. San Francisco: Jossey-Bass.

Tomlinson, C. A. and McTighe, J. (2006). Integrating Differentiated Instruction & Understanding by Design: Connecting Content and Kids. Alexandria, VA: Association for Supervision and Curriculum Development.

 

John Myers teaches at OISE. He is a regular contributor to Rapport.