“Seven Strategies of Assessment for Learning” in Action – Part 2

Learning Targets

By Ish Stabosz
Center for Creative Instruction & Technology
Delaware Technical Community College
Stanton Campus

Welcome back to my series on the book Seven Strategies of Assessment for Learning by Jan Chappuis. Last week we looked at Chapter 1, which makes a strong case that formative assessment is directly tied to student learning. This week, we'll delve into Chapter 2, which focuses on practical implementation of the first two of the seven strategies of assessment:

  1. Provide a clear and understandable vision of the learning target.
  2. Use examples and models of strong and weak work.

Collectively, these two strategies help students first answer the question “Where am I going?” so that they are prepared to find a way to get there.

To start this chapter, Chappuis makes it clear that these two strategies come first for a reason. The primary focus of strategies 1 and 2 is to put the learning objectives front and center for students so that they know the purpose of everything else that happens in the classroom. As Chappuis puts it, these strategies "help students understand that the assignment is the means and the learning is the end." Once students have an idea of what good learning looks like (for a particular objective), they are ready for strategies 3 and 4, which help them answer the question "Where am I now?"

Before Chappuis gets into the meat of the strategies, though, she spends considerable time helping readers understand how to write a clear learning target, which she presents as a prerequisite for implementing strategies 1 and 2.

Learning target is just another term for learning objective or, as we call them at Delaware Tech, measurable performance objective (MPO). Learning targets can be broad curriculum-level objectives (Write an essay that defends an original thesis); they can be narrow lesson-level objectives (Use sufficient exploratory writing strategies to develop a tentative thesis for a reflective essay); or they can fall somewhere in between.

In defining what a good learning target looks like, Chappuis classifies targets into four types and then discusses the forms of assessment that are useful for assessing each type. I've summarized her discussion in the chart below.

Knowledge-level: recalling facts, detailing procedures, and explaining concepts. Best assessed by:
  • Selected response
  • Written response
  • Personal communication

Reasoning-level: performing specific thought processes, such as hypothesizing, critiquing, and evaluating. Best assessed by:
  • Written response
  • Personal communication

Skill-level: executing specific physical performances, such as graphing on a calculator, pronouncing a word correctly, or touching your toes without bending your knees. Best assessed by:
  • Performance assessment
  • Personal communication (for language-based skills)

Product-level: creating a quality product, such as an essay, a presentation, or a design plan. Best assessed by:
  • Performance assessment

Selected response assessments include multiple-choice, true/false, fill-in-the-blank, and similar tests that generally involve a single correct answer.

Written response assessments are short answer items, usually not extending beyond several sentences in length.

Performance assessments require students to demonstrate mastery or create a product. They are complex and typically evaluated using a detailed rubric. Examples of performance assessments include essays, presentations, and practicums.

Personal communication assessments can be formal or informal conversations, such as class discussions, interviews, and oral exams.

To wrap my head around all of this information, I tried fitting some of the objectives for my own course (ENG 121: Composition) into Chappuis's framework. Take, for example, MPO 1.5 from the college-wide syllabus: "Develop and support an original thesis." To start, I broke this course-level learning target into several smaller lesson-level targets (this is the short list; I could have gone on and on):

  • Employ exploratory writing strategies to generate ideas.
  • Draft a tentative thesis statement to focus your ideas.
  • Outline an essay unified around a thesis.
  • Draft body paragraphs to support a thesis.
  • Synthesize body paragraphs into a unified, coherent essay.

Next, I classified each of these targets according to Chappuis’s four types, and then determined what sort of assessment would best fit the target. My results are indicated in the following chart:

Employ exploratory writing strategies (reasoning-level)
  Written response: use freewriting, idea mapping, etc. to get your thoughts on paper.

Draft a tentative thesis (reasoning-level)
  Written response: analyze your exploratory writing for ideas and then write a thesis statement.

Outline an essay (reasoning-level)
  Performance assessment: break your thesis down into its component parts and create an outline that organizes how you will elaborate on each of these parts.

Draft body paragraphs (product-level)
  Performance assessment: write a series of paragraphs, each one developing a separate part of your outline.

Synthesize paragraphs into an essay (product-level)
  Performance assessment: use introductory and concluding paragraphs, as well as appropriate transitions, to join your body paragraphs together into an essay.

When I evaluate my assessment selections against Chappuis's framework, I notice a bit of a disconnect: for the third target, "outline an essay", I chose a performance assessment for a reasoning-level target. At first I thought this wouldn't fit Chappuis's method, but upon consulting the chapter again, I see that she says performance assessments can be used, with caution, to assess knowledge- and reasoning-level targets. My gut tells me that this assessment is a good fit, so I decided to stick with it.

Strategy 1: Provide a Clear and Understandable Vision of the Learning Target

With learning targets appropriately defined and classified, Chappuis gets into the first strategy, which is essentially about sharing your learning objectives with students so that they understand and internalize them. As Chappuis repeats throughout this chapter (and other parts of the book), "Targets on the wall are not targets in the head." We could say the same about targets in the syllabus, on Blackboard, or on the rubric. The point is that writing learning objectives down somewhere doesn't guarantee that students know what their goal is, and we shouldn't take it for granted that they do. Strategy 1 is all about getting them to understand.

Chappuis details three basic ways to share learning targets with students, explaining that each method is suitable for different types of targets.

Method 1: Sharing Targets As Is

For knowledge-level targets and straightforward skill-level targets, it's usually sufficient just to tell the students what the target is. For example, "Today we are going to learn what the term angle of vision means", or "In today's class, you'll practice drawing circles with a device called a compass." These targets are clear enough that students don't really need further clarification. Even if they don't yet know what "angle of vision" means, learning it is precisely the focus of the target.

Method 2: Converting Targets to Student-Friendly Language

Reasoning-level targets typically involve cognitive processes identified by jargon that might mean a lot to educators but nothing to students. Take, for example, one of the learning targets that I shared previously: "Outline an essay unified around a thesis." Several of those words are jargon: they mean something specific to writing instructors, but students might have only a hazy understanding of what they mean.

Chappuis details a six-step process for translating such targets for students, but I'll sum the process up like this: identify the difficult terms, then rephrase the learning target as a "We are learning" statement. Using this method, I could translate the learning target above as follows: "We are learning to make a plan for an essay that will help us prove our point without straying off topic."

Method 3: Using Rubrics to Communicate Learning Targets

When we use a written response or performance assessment to assess reasoning-, skill-, or product-level learning targets, the best way to communicate the complexity of the target to students is to design a detailed rubric. Chappuis explains that a good rubric should help students see what quality work looks like.

She warns against rubrics that use quantitative language ("Each paragraph has 4 supporting details", "3 supports", "2 supports") and rubrics that use only evaluative language ("excellent details", "good details", "poor details"). Instead, she argues that good rubrics should use descriptive language to communicate the specific elements of quality at varying degrees, as in the example below:

Excellent: Details develop the topic sentence through consistent use of "showing" details.
Satisfactory: Details develop the topic sentence, but the details are often more "telling" than "showing".
Unsatisfactory: Details do not develop the topic sentence in any significant way.

Rubrics with descriptive language like this help students internalize the definition of quality work. Chappuis further stresses that rubrics should be as generalizable as possible rather than specific to the task. Task-specific rubrics teach students how to complete an assignment for a class, whereas generalized rubrics teach students how to master a learning target that can be transferred to other situations.

Introducing Learning Targets to Students

It isn’t enough just to make the learning targets understandable to students. Even well-worded targets on the rubric aren’t targets in the head. A big part of implementing strategy 1, according to Chappuis, is getting students to engage with the targets until they can demonstrate understanding. Chappuis goes into a lot of detail about different ways to do this, so I’ll sum up a few of the ones that struck me:

  • Provide students with the learning target as is, have them attempt to define it in pairs, and then facilitate a class-wide discussion to dissect it into student-friendly terms.
  • Give students a chart of the learning targets at the start of the unit. Have students frequently note on the chart their progress, strengths, and challenges with each target.
  • Before giving students a rubric, have them brainstorm what good ________ looks like (in the blank, place the learning target or product). Then show students your rubric, and have them identify the criteria on your rubric that match what they came up with.

Putting Strategy 1 into Action

Of course, all the information in this book would be meaningless if it didn't actually change the way I do things in my classes, so I want to show you how I've implemented the first strategy. Since I teach writing, all of our big assignments are assessed with rubrics, so rubrics are the primary way I communicate my learning targets to students.

To start my work with strategy 1, I decided to re-evaluate my rubric for the research paper to see whether it communicates with the clear, descriptive language that Chappuis values. For brevity's sake, I'll just look at one piece of the rubric, the part that addresses the learning target Synthesize research effectively and appropriately to prove your thesis. Here's my old version:

A: Exemplary
  • Thesis is well-developed and insightful, demonstrating significant wallowing in the research to look at the topic from a sophisticated perspective.
  • Research that is both broad and deep sheds new light on the topic. Use of source material is innovative, varied, and engaging.

B: Accomplished
  • Thesis is well-developed, demonstrating meaningful wallowing in the research in order to gain a realistic perspective of the topic.
  • Concrete, "showing" evidence from research advances the main idea. Source material is well-chosen and always serves a purpose.

C: Satisfactory
  • Thesis is developed, though the details could probe deeper. The research is often surface-level, or it demonstrates only a limited perspective on the topic.
  • Research advances the main idea but is not particularly "showing". Source material generally fits where it is placed.

F: Unsatisfactory
  • Thesis is superficial or undeveloped, demonstrating biased or unfocused research.
  • The research is irrelevant or lacking in substance. Source material is poorly chosen or not related to the point.

I originally framed this as a four-level rubric to correspond with Delaware Tech's four letter grades. However, Chappuis points out that most rubrics work best when there are five levels of quality and only three of them (levels 1, 3, and 5) are defined. A student scores one of the remaining levels (2 or 4) when the quality of their work demonstrates a mix of the adjacent descriptors; for example, if the work shows some proficiency at level 5 and some at level 3, it scores a 4. So the first revision to my rubric is to limit my defined criteria to A-, C-, and F-level work. (I also decided to switch to numbers instead of letters in hopes of having students focus on the learning rather than the grade, as we learned in Chapter 1.)

The next thing I notice is that some of the descriptors are not parallel enough between one level and the next. For example, A-level research is "both broad and deep" and "sheds new light on the topic", while C-level research "advances the main idea but is not particularly showing". While both might work for me, I don't think they really help students understand the varying levels of quality. In addition, several of my descriptors could be broken up into separate bullets for clarity. Based on these observations, I revised my rubric as follows:

5: Exemplary
  • Thesis is well-developed and insightful.
  • The paper looks at the topic from a sophisticated perspective.
  • "Showing" details from research consistently advance the thesis.
  • Research comes from a variety of appropriate sources.

3: Satisfactory
  • Thesis is developed, though the details could probe deeper.
  • The paper looks at the topic from a limited perspective.
  • Details from research advance the thesis, but they are not particularly "showing".
  • Research comes from sources that are usually appropriate, but lacking in variety.

1: Unsatisfactory
  • Thesis is superficial or undeveloped.
  • The paper looks at the topic from an irrelevant perspective.
  • Details rarely come from research.
  • Research comes from sources that are usually inappropriate.

The new version, I think, is much better at letting students know the difference between high- and low-quality research writing. Although the rubric still contains some language that might require clarification for students, that clarification will come once we implement strategy 2.


I originally thought that I could digest all of Chapter 2 in just one post, but here I am only through the first strategy and my word counter is already pushing 2,000. So, in hopes of keeping my readers awake, I’ll save the rest of chapter 2 for a later post. Stay tuned for my practical applications of the second strategy of assessment for learning: Use Examples and Models of Strong and Weak Work.

Until then, as always: post your thoughts, your challenges, and your own examples in the comments.

6 thoughts on "'Seven Strategies of Assessment for Learning' in Action – Part 2"

  1. That’s a lot of information! I think I really need to buy this book. I have a quick question about the descriptive language used in rubrics, however. If you’re writing the rubric for students, as opposed to instructors, how simplistic should the vocabulary be? Should you challenge students to define words like “cohesion” in the lower level English classes? Just a thought I’ve had while working on rubrics.

    • It’s definitely a worthy buy. I think every department should have a copy.

      In response to your question, Chappuis gives three methods for communicating learning targets, and only one of them really focuses on student-friendly language. Rubrics, in order to stay concise, are generally going to use some of the jargon of the academic discipline. However, the process of getting students to internalize the learning target will also need to help them define that jargon. The ultimate goal, after all, is to get students to have the same understanding of quality as the teacher. So, ideally, a well-trained student would eventually come to know what "cohesion" is and be able to identify writing that is coherent and writing that isn't.

      Strategy 2, which I'll cover in my next post, talks about using model work to teach students learning targets. One activity in particular involves giving them models and using those models to help them understand the rubric. That activity gets students to start using the jargon of their discipline.

      • Thanks, Ish! That actually clears this up a lot. I think it’s really hard for beginning writers to evaluate writing without experience. Hopefully, armed with a few of these strategies and perspectives, I can try to combat this!
