Sunday, September 02, 2018

prologue to a syllabus

(This was originally supposed to be the post in which I describe my syllabus for the fall. I started writing some preliminary comments, and they got out of control. I’ll get back to the syllabus itself in my next post.)

First, I must express some gratitude. Thanks to parental leave provided by the state of California and my school, I did not have any teaching duties last spring. It was my first time not teaching first-semester calculus in four years. As I tell my students, calculus 1 is actually one of my favorite classes to teach, but I could tell by last fall that some parts of the course were getting stale. Having a semester break meant that, in addition to getting to know my newborn daughter, I could let my ideas on how to improve calculus instruction and assessment simmer for a bit.

Actually, it’s not entirely honest to refer to “my ideas” in this setting; what I really needed was a chance to reflect on ideas I’d been picking up (stealing) from others, and even better, to acquire (steal) some fresh ideas, which a workshop and conference provided over the summer. A fabulous community of college and university math teachers has formed around the question of how to improve our assessment practices, and the rate at which ideas get shared/stolen/developed there is remarkably fast.

Over the past few weeks, as the fall semester has started up, several people have shared their syllabi along with extensive, thoughtful commentary on how they created them. I’ve been holding back, however, because while I believe my syllabus is better than it was last year, by the time the semester started I felt I had only gotten it to “just good enough.” Some ideas aren’t fully developed yet, some feel out of balance, and some are just plain risky. Nevertheless, in the spirit of community and maintaining a growth mindset, I’ve decided to go ahead and share my syllabus, too, warts and all.

Since I see this as a long-term work in progress, I’d like to begin with a few words about that progress. (These comments will parallel somewhat my talk from MathFest last month.) I started using standards-based grading (SBG) in spring 2013, largely as a way to improve the feedback I was giving students. After a reasonably successful first attempt, I began using alternative assessment methods in all of my classes. Some worked better than others, but because I was teaching calculus 1 so often, my SBG system for that class developed into a fairly stable collection of 25–30 standards.

Around the same time, Robert Talbert was blogging about specifications grading, a well-developed and flexible framework whose goals, in the words Linda Nilson uses to subtitle her book on the topic, are “restoring rigor, motivating students, and saving faculty time.” For a while I remained skeptical about specs grading, because I couldn’t understand why anyone would turn to something besides my beloved standards. Eventually, however, I realized that SBG as I conceived it didn’t work in every situation, and so I delved more into specs. The Google+ community initiated by Robert goes by the name SBSG, to include both standards-based and specifications grading. Today the language of the community encompasses these and other alternative assessment systems under the broader term mastery grading, which hearkens back to Bloom’s terminology of mastery learning.

At MathFest, I talked a bit about the history of my classes and did some compare-and-contrast between SBG and specs grading. Possibly the most useful contribution I made to that session was the following six-word summary of how they relate:

Standards emphasize content.
Specifications emphasize activity.

Here’s another way to phrase the distinction in my mind: when we create standards, we are answering the question “What do we want students to be able to do?” When we create specifications, we are answering the question “What do we want students to have done?” More bluntly, standards are what we want to measure, while specifications are what we can actually measure; the latter is a proxy for the former.

I guess my claim is that standards and specifications support each other: they are two sides of the same coin. We need specifications in order to determine how standards will be assessed, and a clear list of standards keeps specifications from becoming arbitrary. (Or as Drew Lewis said on Twitter, “specs are how I assign letter grades, with the primary spec being mastery of standards.”) Whether I say that an assessment system is based on specifications or standards depends on whether the description of the system focuses on the proxy or the thing for which it proxies.

By last fall, some cracks in my SBG system for calculus had started to show. Every semester, a couple of students would reach the end of the course still finding the system unclear. The homogeneity of the list of standards was mushing the most important concepts together with secondary ones. Worst of all from a practical standpoint, I was finally getting overwhelmed by reassessments, after years of claiming that SBG didn’t take any more of my time than traditional grading. I knew I needed to make some changes to clarify and streamline the assessment process.

What I have for now isn’t perfect, but it will get me through the semester. With this lengthy prologue complete, in my next post I’ll share parts of my syllabus and explain what I hope it achieves.
