Wednesday, June 01, 2016

Homology modulo 2

Last week, I was chirping on Twitter about “homology modulo 2”: how closely it matches my geometric intuition of what homology should measure, despite my never having thought seriously about it before, and how its computational simplicity makes it seem like an ideal way to introduce homology to undergraduates, even those who haven’t studied linear algebra. For a very complete graduate-level introduction to homology (and cohomology) modulo 2, check out Jean-Claude Hausmann’s book. I will instead try to demonstrate how this topic can be introduced at nearly any level, with an appropriate amount of care. For the sake of brevity, I will assume familiarity with linear algebra in this post; however, the necessary elements (image, kernel, rank, row reduction) can easily be learned in the context of homology, particularly when working modulo 2.

Note: This post got long enough in the writing that I didn’t make any pictures to go with it, so you should draw your own! The idea is to discover how algebra can be used to extract geometric/topological information in a way that is really striking when you see it happen.

The space

For simplicity of exposition, I will only consider spaces \(X\) that are created from finitely many convex polytopes (often simplices or hypercubes) by making some identifications (“gluings”) between their faces. The faces are not necessarily joined in pairs, however; more than two faces of the same dimension may be identified, or some faces might not be joined at all. A more careful definition is possible, but providing one would take us away from the fairly non-technical introduction I’m aiming for. Just assume no funny stuff happens, OK? The polytopes that make up \(X\) are called the cells of \(X\); the collection of cells includes all the faces of all the polytopes we started with (some of which, as noted above, have been identified with each other in pairs or larger groupings). Each cell, being a polytope, has a dimension, and if we wish to specify the dimension of a cell as \(k\), we call it a \(k\)-cell.

For example, \(X\) could just be a single convex polytope. Or it could be a convex polytope with the interior removed (keeping in mind that the boundary of a convex polytope is a union of convex polytopes of one dimension lower). The outside of a cube, for instance, is made up of six 2-cells (the faces), twelve 1-cells (the edges), and eight 0-cells (the vertices). A torus, when made from a rectangle by identifying opposite sides, is also such a space, with one 2-cell (the interior of the rectangle), two 1-cells (the result of identifying the edges in pairs), and one 0-cell (because all corners of the rectangle are identified to the same point).

The data

The homology of \(X\) measures the difference between objects in \(X\) that have no boundary (these are called cycles) and objects that are the boundaries of other objects (called, quite sensibly, boundaries). A \(k\)-dimensional cycle that is not a boundary is supposed to “enclose” a \(k\)-dimensional “hole” in \(X\). The formal definitions are intended to quantify what is meant by “boundary;” the intuitive notion of “hole” floats along, generally defying proper definition (and often even intuition).

By “object” in the previous paragraph, we mean something made up from the cells of \(X\). We restrict ourselves to putting together cells of the same dimension, producing objects called chains. That is, a \(k\)-chain is just a collection of \(k\)-cells in \(X\). We can add together \(k\)-chains, but—and this is the beautifully simple part—we add modulo 2. If a particular cell appears twice, the two appearances cancel each other out. The idea is that, since we’re trying to study “holes” in our space \(X\), if one cell appears twice, the pair of copies can be joined up along their common boundary and safely removed. Formally, a \(k\)-chain is a linear combination of \(k\)-cells, with coefficients in the field with two elements, if you find such a formal description helpful.

We now proceed to the key combinatorial data of our space \(X\) and see how it can be used to extract topological information. Because \(X\) is made up of finitely many cells, for each \(k = 1, \dots, n\), where \(n\) is the largest dimension of a cell of \(X\), we can construct a boundary matrix \(\partial_k\). (Normally \(\partial_k\) would be defined as a linear map between certain vector spaces; we are fully exploiting the equivalence between linear maps and matrices.) The columns of \(\partial_k\) are labelled by the \(k\)-cells of \(X\), and the rows are labelled by the \((k-1)\)-cells. In each column, we put a 1 in each position where the corresponding \((k-1)\)-cell lies in the boundary of the given \(k\)-cell, and a 0 otherwise. One exception: sometimes the faces of a single \(k\)-cell may be joined to each other, so that the resulting \((k-1)\)-cell appears with multiplicity on the boundary of that \(k\)-cell. This multiplicity, taken modulo 2, is what goes in the boundary matrix. See the boundary matrices of the torus, near the end, for examples of this phenomenon.

A concrete example: the tetrahedron

The boundary matrix, like most computational objects, is best understood through examples. Let’s start with the empty tetrahedron. Label the vertices \(v_1\), \(v_2\), \(v_3\), \(v_4\), and let \(f_i\) be the triangular face opposite \(v_i\). Let \(e_{ij}\) be the edge joining \(v_i\) to \(v_j\), with \(i < j\). Then we have two boundary matrices,

\( \partial_1 = \begin{bmatrix} 1 & 1 & 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 1 & 1 & 0 \\ 0 & 1 & 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 & 1 & 1 \end{bmatrix} \)      and      \(\partial_2 = \begin{bmatrix} 0 & 0 & 1 & 1 \\ 0 & 1 & 0 & 1 \\ 0 & 1 & 1 & 0 \\ 1 & 0 & 0 & 1 \\ 1 & 0 & 1 & 0 \\ 1 & 1 & 0 & 0 \end{bmatrix}\).
In \(\partial_1\), the columns are labelled by the edges and the rows are labelled by the vertices. In \(\partial_2\), the rows are labelled by the edges and the columns are labelled by the faces. In both matrices, the edges are listed in the order \(e_{12}\), \(e_{13}\), \(e_{14}\), \(e_{23}\), \(e_{24}\), \(e_{34}\). Notice that each column of \(\partial_1\) has two 1s, because each edge has two endpoints, and each column of \(\partial_2\) has three 1s, because each face is bounded by three edges.

Once we have these matrices, we can use them to find boundaries of more general chains. For instance, when joined together, the edges \(e_{12}\) and \(e_{23}\) form a path from \(v_1\) to \(v_3\), so we expect the boundary to be these two points. Indeed, adding together (modulo 2!) the corresponding entries from the first and fourth columns of \(\partial_1\), we see that the 1s in the second entry cancel (which corresponds to the edges being joined at \(v_2\)), and we are left with 1s in the first and third entries. We can write this relation as \(\partial_1(e_{12}+e_{23}) = v_1 + v_3\). Similarly, if we add together the first three columns of \(\partial_2\), which correspond to \(f_1\), \(f_2\), and \(f_3\), the result is a vector with 1s in the first, second, and fourth entries, which correspond to \(e_{12}\), \(e_{13}\), and \(e_{23}\), producing the equation \(\partial_2(f_1 + f_2 + f_3) = e_{12} + e_{13} + e_{23}\). This demonstrates that the union of three of the faces has the same boundary as the fourth face. The sum of all four columns of \(\partial_2\) has all 0s for its entries, showing that the four faces of the tetrahedron, taken together, have no boundary.
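All of these computations are easy to reproduce on a computer, and working modulo 2 means plain integer arithmetic suffices. Here is a minimal sketch in Python with numpy (the array names are my own, not notation from any homology library) that encodes the two boundary matrices above, redoes the chain computations we just did by hand, and checks the identity \(\partial_1 \partial_2 = 0\) discussed in the next section.

    import numpy as np

    # Boundary matrices of the hollow tetrahedron, entries mod 2.
    # Columns of d1: e12, e13, e14, e23, e24, e34; rows: v1, v2, v3, v4.
    d1 = np.array([[1, 1, 1, 0, 0, 0],
                   [1, 0, 0, 1, 1, 0],
                   [0, 1, 0, 1, 0, 1],
                   [0, 0, 1, 0, 1, 1]])

    # Columns of d2: f1, f2, f3, f4; rows: e12, e13, e14, e23, e24, e34.
    d2 = np.array([[0, 0, 1, 1],
                   [0, 1, 0, 1],
                   [0, 1, 1, 0],
                   [1, 0, 0, 1],
                   [1, 0, 1, 0],
                   [1, 1, 0, 0]])

    # Boundary of the 1-chain e12 + e23: add the first and fourth columns, mod 2.
    print((d1 @ [1, 0, 0, 1, 0, 0]) % 2)   # [1 0 1 0], i.e., v1 + v3

    # Boundary of f1 + f2 + f3 matches the boundary of f4 ...
    print((d2 @ [1, 1, 1, 0]) % 2)         # [1 1 0 1 0 0] = e12 + e13 + e23
    # ... so all four faces together form a 2-chain with empty boundary.
    print((d2 @ [1, 1, 1, 1]) % 2)         # [0 0 0 0 0 0]

    # The fundamental identity of the next section: d1 d2 = 0 mod 2.
    print((d1 @ d2) % 2)                   # the 4x4 zero matrix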

How to extract information from the boundary matrix

Having illustrated some computations with boundary matrices in the above example, let’s codify some definitions. A collection of \(k\)-cells is called a \(k\)-cycle (or closed) if the sum of the corresponding columns of \(\partial_k\) is the zero vector. (This is a formal way of saying “has no boundary.”) A collection of \(k\)-cells is called a \(k\)-boundary (or exact) if it can be obtained as a sum of columns of \(\partial_{k+1}\). In linear algebra terms, a \(k\)-cycle is an element of the kernel of \(\partial_k\), and a \(k\)-boundary is an element of the image of \(\partial_{k+1}\). Again, the benefit of working modulo 2 is that these conditions can be easily checked. The set of \(k\)-boundaries is denoted \(B_k\), and the set of \(k\)-cycles is denoted \(Z_k\) (the notation \(C_k\) generally being reserved for \(k\)-chains).

A fundamental property is that \(\partial_k \partial_{k+1} = 0\), which has the satisfying geometric interpretation that “every \(k\)-boundary is a \(k\)-cycle,” or \(B_k \subseteq Z_k\). This property can be checked directly in the above example of the tetrahedron. In general, it applies because, in a \(k\)-dimensional polytope, each \((k-2)\)-dimensional face appears in two \((k-1)\)-dimensional faces (provided \(k \ge 2\); if \(k=1\), then there are no \((k-2)\)-dimensional faces, so \(\partial_0 = 0\), and the property \(\partial_0 \partial_1 = 0\) holds trivially). From the perspective of homology, this means boundaries aren’t “interesting” cycles. They’re the boundaries of something, after all, so they certainly don’t enclose a “hole.”

What we really want to measure, then, is how many cycles are not boundaries. To determine this, we first need to find out how many cycles and how many boundaries there are. Except we can add cycles together to get new cycles (in linear algebra terms, the kernel of a matrix is a subspace of the domain), and we can add boundaries to get new boundaries (the image of a matrix is also a subspace), so what we really want is to know how many independent cycles there are: that is, we want the dimension or rank of the set of cycles and the set of boundaries. I’ll use rank here, even though we’re working with vector spaces, because that terminology transfers to the case of integral homology.

The rank of the \(k\)-boundaries is the rank of \(\partial_{k+1}\), because by definition this describes the maximal number of independent boundaries of \((k+1)\)-chains. On the other hand, the rank of the \(k\)-cycles is the nullity of \(\partial_k\), because this measures the maximal number of independent \(k\)-chains with no boundary. From linear algebra, we know that the rank of a matrix can be determined by row reducing to echelon form and counting the number of rows (equivalently, columns) that have leading ones.
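Row reduction modulo 2 requires no fractions and no care about pivot size, so it is easy to implement from scratch. The following is one possible sketch (mine, with no claim to efficiency): Gaussian elimination over the field with two elements, returning the rank; the nullity is then the number of columns minus the rank.

    import numpy as np

    def rank_mod2(M):
        # Rank of a 0-1 matrix over the field with two elements.
        A = np.array(M, dtype=int) % 2
        rank = 0
        for col in range(A.shape[1]):
            pivots = np.nonzero(A[rank:, col])[0]
            if len(pivots) == 0:
                continue                   # no pivot available in this column
            swap = rank + pivots[0]
            A[[rank, swap]] = A[[swap, rank]]
            for r in range(A.shape[0]):    # clear the rest of the column, mod 2
                if r != rank and A[r, col] == 1:
                    A[r] = (A[r] + A[rank]) % 2
            rank += 1
            if rank == A.shape[0]:
                break
        return rank

    def nullity_mod2(M):
        # Rank-nullity: nullity = (number of columns) - rank.
        return np.array(M).shape[1] - rank_mod2(M)

For example, rank_mod2(d1) and rank_mod2(d2) both return 3 for the tetrahedron matrices above, agreeing with the row reductions shown below.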

Homology gets its name from the notion of homologous cycles (“homologous” meaning, etymologically, “having the same position or structure”). Two \(k\)-cycles are homologous if their difference is a \(k\)-boundary. Modulo 2, the difference of two objects is the same as their sum, so this just means that two cycles are homologous if, when we put them together, they form the boundary of an object of one higher dimension. Boundaries are “homologically trivial” because, by definition, they are homologous to the chain consisting of no cells, \(0\). The \(k\)th homology of \(X\) is the quotient (group, vector space, module, etc.) of the cycles by the boundaries: \[ H_k = Z_k/B_k. \] The associated numeric invariant is the \(k\)th Betti number \(\beta_k\) of \(X\), which is the rank of the \(k\)th homology. It can thus be computed as the difference between the rank of the \(k\)-cycles and that of the \(k\)-boundaries: \[ \beta_k = \mathrm{rank}\,Z_k - \mathrm{rank}\,B_k. \] This is the number that “counts” the “\(k\)-dimensional holes” in our space \(X\). Note that this is an ordinary natural number, not an integer modulo 2. However, when working modulo 2, the Betti numbers entirely determine the homology, up to isomorphism. (In ordinary, integral homology, this is not the case: homology may have “torsion” elements, while the Betti numbers only count the “free” part of homology. The integral homology determines the mod 2 homology, but the reverse is not true, so homology modulo 2 is undoubtedly “weaker,” and there are certainly times one would want the full theory. However, I hope this post is illustrating the benefits of using homology modulo 2 as a shortcut for introducing the key concepts.)

Examples of homology

Let’s return to the example of the tetrahedron. Using \(\sim\) for row equivalence, we have

\( \partial_1 \sim \begin{bmatrix} 1 & 0 & 0 & 1 & 1 & 0 \\ 0 & 1 & 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix} \)      and      \(\partial_2 \sim \begin{bmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}\).
The rank of both matrices is \(3\). The nullity of the first matrix is \(6 - 3 = 3\), and the nullity of the second matrix is \(4 - 3 = 1\). Thus we have \[ \mathrm{rank}\,Z_1 = 3, \qquad \mathrm{rank}\,B_1 = 3, \qquad \mathrm{rank}\,Z_2 = 1, \qquad \mathrm{rank}\,B_2 = 0, \] the last quantity following from the fact that there are no 3-cells in the empty tetrahedron. We also know that \[ \mathrm{rank}\,Z_0 = 4, \qquad \mathrm{rank}\,B_0 = 3, \] with the first of this pair of equations coming from the fact that a point has no boundary. The Betti numbers of the tetrahedron are \[ \beta_0 = 4 - 3 = 1, \qquad \beta_1 = 3 - 3 = 0, \qquad \beta_2 = 1 - 0 = 1. \] Here is a geometric interpretation of these numbers, in reverse order.
  • The equation \(\beta_2 = 1\) means that there is one independent 2-cycle which is not a boundary. The reduced form of \(\partial_2\) shows that this cycle is \(f_1 + f_2 + f_3 + f_4\), i.e., the sum of all the faces of the tetrahedron. Thus, when we take all the faces together, the result is a closed cycle, and no other combination of faces has an empty boundary. Roughly speaking, the entire tetrahedron encloses a “hole.”
  • The equation \(\beta_1 = 0\) can be read as “every 1-cycle is a 1-boundary.” A stronger form of this statement is that the tetrahedron is simply connected—every loop can be contracted to a point, or every closed loop on the tetrahedron is the boundary of something 2-dimensional. Roughly speaking, there are no holes on the surface of the tetrahedron.
  • The “holes” measured by the 0th homology are of a somewhat different type. Generally speaking, the Betti number \(\beta_0\) measures the number of connected components. Because any point has no boundary on its own (hence is a 0-cycle), two vertices form a boundary if and only if they can be joined by a path of edges. Thus the equation \(\beta_0 = 1\) simply means that the tetrahedron is connected.

Now let’s turn to the example of the torus, formed from a rectangle by identifying opposite sides. This space has one 2-cell \(f\) (the interior of the rectangle), two 1-cells \(e_1\) and \(e_2\) (the edges of the rectangle, after being identified in pairs), and one 0-cell \(v\) (all four vertices of the rectangle become a single point on the torus). Each edge \(e_i\) appears twice on the boundary of \(f\), and the vertex \(v\) appears at both ends of each edge, so the boundary matrices are \[ \partial_1 = \begin{bmatrix} 0 & 0 \end{bmatrix}, \qquad\qquad \partial_2 = \begin{bmatrix} 0 \\ 0 \end{bmatrix}. \] Thus every \(k\)-chain has an empty boundary for \(k = 0, 1, 2\), and the rank of the \(k\)-cycles equals the number of \(k\)-cells. The interpretations of \(\beta_0 = 1\) and \(\beta_2 = 1\) are the same as in the case of the tetrahedron. In this case, the equation \(\beta_1 = 2\) tells us there are two different, independent 1-cycles, which can be represented by a latitude circle and a longitude circle on the torus.
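Continuing the sketches above (and reusing d1, d2, and the two helper functions), the Betti numbers of both examples take only a few lines; the betti wrapper is a convenience of my own, not standard terminology.

    def betti(dk, dk_plus_1):
        # beta_k = rank Z_k - rank B_k = nullity(d_k) - rank(d_{k+1}).
        return nullity_mod2(dk) - rank_mod2(dk_plus_1)

    # Tetrahedron: points have no boundary, so d0 is a zero matrix with one
    # column per vertex; with no 3-cells, d3 has one row per face and no columns.
    d0 = np.zeros((1, 4), dtype=int)
    d3 = np.zeros((4, 0), dtype=int)
    print(betti(d0, d1), betti(d1, d2), betti(d2, d3))   # 1 0 1

    # Torus: one 0-cell, two 1-cells, one 2-cell; every boundary map is zero.
    t0 = np.zeros((1, 1), dtype=int)
    t1 = np.zeros((1, 2), dtype=int)
    t2 = np.zeros((2, 1), dtype=int)
    t3 = np.zeros((1, 0), dtype=int)
    print(betti(t0, t1), betti(t1, t2), betti(t2, t3))   # 1 2 1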

Footnote on topological spaces

A few words justifying the restriction to polytopal complexes: When I was in Hatcher’s algebraic topology class, he chose to introduce cellular homology first so that we could get to computations quickly; later he introduced singular homology mainly to prove that the homology groups only depend on the underlying topological space. It thus seems entirely reasonable to me, for purposes of introduction, to work directly with CW complexes. The appendix to Hatcher’s book is a standard reference for learning about CW complexes, but in practice a CW complex usually means a topological space that is assembled from convex polytopes, attached along their faces.

Another introductory source on homology for undergraduates

I recently came across Peter Giblin’s book Graphs, Surfaces and Homology, which provides a very thorough introduction to its eponymous topics with only the prerequisite of linear algebra. However, like most treatments of homology, it first deals with integral homology, then comes around to homology modulo 2 late in the book, in Chapter 8, specifically to deal with non-orientable (or at least unoriented) surfaces and simplicial complexes. Giblin describes the theory of homology modulo 2 as “satisfactory” but “weaker than the theory with integral coefficients,” which is absolutely true.

However, if one’s goal is either to learn about homology quickly or to study new spaces (rather than, say, to prove the classification of surfaces), then I think homology modulo 2 is perfectly sufficient, particularly since the contemporary field of persistent homology, applied to study data sets in large dimensions, often works with homology modulo 2. (See this survey, or the remark on page 7 of this overview, for instance.)

Saturday, May 21, 2016

Snell and Escher

A few weeks ago, Grant Sanderson posted a video on the brachistochrone, with guest Steven Strogatz.


The video explains Johann Bernoulli’s solution to the problem of finding the brachistochrone, which is a clever application of Snell’s Law. I immediately wondered if a similar application could be used to explain the behavior of geodesics in the hyperbolic plane; it turns out it can. I’m not the first to think of this, but it doesn’t seem to be well-known, so that’s what I’ll try to explain in this post. This may become my standard way of introducing hyperbolic geometry in informal settings, i.e., when formulas aren’t needed. (As an example of another exposition that describes hyperbolic geodesics this way, see the lecture notes for this geometry course.)

Snell’s Law, as represented in the above diagram (image source), applies to light traveling from one medium to another, where the interface between the two is horizontal. If light travels at speed \(v_1\) in the first medium and \(v_2\) in the second medium, and its trajectory meets the interface at an angle of \(\theta_1\) and leaves at an angle of \(\theta_2\) (both angles measured with respect to the vertical), then \[ \frac{\sin\theta_1}{v_1} = \frac{\sin\theta_2}{v_2}. \] This is the case of two distinct media. Snell’s Law has a continuous version (derived from the discrete one by a limiting process, as suggested in the video). Suppose light is traveling through a medium with the property that the speed of light at each point depends on the vertical position of the point. That is, the speed of light in this medium at a point \((x,y)\) is a function \(v(y)\), which may vary continuously. At each point of a trajectory of light in this medium, let \(\theta\) be the angle formed by the direction of the trajectory (i.e., the tangent line) and the vertical. Then the quantity \[ \frac{\sin\theta}{v(y)} \] is constant along the trajectory.

So suppose we are looking at a medium that covers the half-plane \(y > 0\), in which light travels at a speed proportional to the distance from the \(x\)-axis: \(v(y) = cy\). (The constant \(c\) may be thought of as the usual speed of light in a vacuum, so that along the line \(y = 1\) light moves at the speed we expect. As we shall see, this is analogous to the fact that distances along the line \(y = 1\) in the hyperbolic metric match Euclidean distances. Of course, it also means that light moves faster than \(c\) above this line, which is physically impossible, but we’re doing a thought experiment, so we’ll allow it.) If we imagine someone living inside this medium trying to look at an object, what direction should they face?

From our outside perspective, it seems that the observer should look “directly at” the object, in a straight (Euclidean) line. However, in this medium light does not travel along Euclidean line segments, but instead along curved arcs, as illustrated below.

Click on the graph to go to an interactive version.

It’s not too surprising that light follows a path something like this if it’s trying to minimize the time it takes to travel from the object to the observer: the light travels faster at higher vertical positions, so it’s worth going up at least slightly to take advantage of this property, and it’s also worth descending somewhat sharply so as to spend as little time as possible in the lower, slower regions.

What may come as a surprise is that the path of least time is precisely a circular arc. With Snell’s Law, however, this fact can be derived quickly. We have that \(v(y) = cy\), and so along a light trajectory \[ \frac{\sin\theta}{cy} = \text{constant}. \] Multiplying both sides by \(c\), we find that \(\frac{\sin\theta}{y}\) is also a constant. If this constant is zero, then \(\theta = 0\) constantly, so the path is a vertical segment. Otherwise, call this constant \(\frac{1}{R}\). Then \(y = R \sin\theta\). Now set \(x = a + R \cos \theta\). The curve \[ (x,y) = (a + R \cos\theta, R \sin\theta) \] parametrizes a circle centered at \((a,0)\) by the angle between the \(x\)-axis and the radius to the point. It remains to see that this angle \(\theta\) is the same as the angle between the vertical direction and the tangent line at the corresponding point of the circle. This equality can be shown in any number of ways from the diagram below.

Click on the graph to go to an interactive version.

This is not to say that this parametrization describes the speed at which light moves along the path. As previously observed, light slows as it approaches the horizontal boundary, that is, the \(x\)-axis.
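If you would rather check the circular-arc claim numerically than geometrically, here is a quick sanity test (a sketch of my own, not tied to any optics library): sample points along the parametrized circle, compute the angle each tangent makes with the vertical, and confirm that \(\frac{\sin\theta}{y}\) is constant.

    import numpy as np

    R, a = 2.0, 1.0
    t = np.linspace(0.1, np.pi - 0.1, 5)        # stay off the x-axis endpoints
    y = R * np.sin(t)                           # heights of the sample points

    # Tangent direction: (dx/dt, dy/dt) = (-R sin t, R cos t).
    dx, dy = -R * np.sin(t), R * np.cos(t)
    sin_theta = np.abs(dx) / np.hypot(dx, dy)   # sine of the angle with vertical

    print(sin_theta / y)                        # [0.5 0.5 0.5 0.5 0.5] = 1/R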

But perhaps we’ve been prejudiced in assuming our perspective is the right one. We’ve been looking with our Euclidean vision and supposing light moves at different speeds depending on where it is in this half-plane. Thus it seems to us that light covers Euclidean distances more quickly the further it gets from the \(x\)-axis. But relativity teaches us that distance isn’t absolute: instead, the speed of light is what’s absolute. So perhaps we could gain greater insight by measuring the distance between points according to how long it takes light to travel between them. That is, we assume that the paths determined above are the geodesics of the half-plane, and by doing so we learn to “see” hyperbolically. Then we are not troubled by looking at an image like

(image source) and being told that all of the pentagonal shapes are the same size, because we’ve learned to look at things with our hyperbolic geometry glasses on.

M. C. Escher illustrated (or, more accurately, approximated) the hyperbolic geometry of the upper half-plane with his print Regular Division of the Plane VI (1958), shown below (image source).

This design was created during a period when Escher was attempting to visually depict infinity, shortly before he encountered the Poincaré disk in a paper by Coxeter, the discovery that led to the Circle Limit series. In this print, the geometry of each lizard is Euclidean, structured around an isosceles right triangle. Each horizontal “layer” has two sizes of triangles, one scaled down from the other by a factor of \(\sqrt{2}\). The side lengths of the triangles in one layer are one-half of those in the layer above, so the heights of layers converge geometrically to the horizontal boundary at the bottom. Some of the triangles are outlined in the next image.

Some questions I have about Escher’s print:

  • How different would this image look if it were drawn according to proper hyperbolic rules, with each lizard having reflectional symmetry, and each meeting of “elbows” having true threefold symmetry? (This would give the tessellation with Schläfli symbol {3,8}, an order-8 triangular tiling.)
  • If we suppose that the right triangles act as prisms, with light moving at a constant speed inside each one, but this speed being proportional to the square root of the triangle’s area, then what will the trajectories of light look like as it moves through the plane? Will they approximately follow circles?
  • How many lizards are in the picture?

Coda: Jos Leys has taken some of Escher’s Euclidean tessellations and converted them to hyperbolic ones, in both the disk and the half-plane model.

Sunday, April 03, 2016

using calculus to understand the world

In my last post, I wrote about how I returned to teaching related rates in my calculus class and ranted a bit about the inanity of most related rates problems. There I mainly discussed the difficulty in reading the statement of such problems and how to make the questions they raise seem more natural. I’d like to expand on this theme with some more examples.

One feature of mathematics that doesn’t get emphasized enough, IMHO, is that it is a science, and as such is based in observation. Often, either we lead students through abstract reasoning to a previously unanticipated result, or we prove things that are so self-evident that the notion they need proof is itself baffling. Now, in the world of professional mathematics, it is true that even apparently obvious facts need proving (remember that “to prove” just means “to test”), and we often do get excited when we are led to something unexpected and beautiful. That is because we have learned how to use and trust our logical skills to examine the truth of something, and we delight in the uncovering of new truth by means of those skills. But even when a result is surprising to an audience, and even if it was at first surprising to the speaker, it is no longer so. A mathematical researcher plays with ideas until she notices something interesting, and then she tries to understand why it is so. That’s the exciting part of math, and that is what I believe we can share with our students through the process of modeling.

My goal in teaching related rates has become to ground as many questions as possible in direct observation. When I ask about the sliding ladder, as I described in my last post, before setting up the math but after asking students what they think will happen, I demonstrate by leaning a ruler against a book and slowly pulling the bottom end away. What one notices in this experiment is that the top end of the ruler moves very slowly at first, and very quickly just before reaching the ground. The speed in the final moment is so great that one is tempted to think the person pulling the bottom end lost control, and gravity took over. (This is even more credible when using a much larger, heavier ladder in a demonstration.) But the math shows that even if the person keeps complete control and moves the bottom end at a precisely constant speed, the same effect will occur. Let’s see why.

The exact length of the ladder doesn’t matter, of course, so call it $L$. If $x$ measures the distance from the wall to the bottom end of the ladder and $y$ measures the distance from the floor to the top of the ladder, then we have $x^2 + y^2 = L^2$. Then we differentiate both sides with respect to time and get $2x\frac{dx}{dt} + 2y\frac{dy}{dt} = 0$, or \[ \frac{dy}{dt} = -\frac{x}{y} \frac{dx}{dt}. \] At this point most related rates problems would ask you about the size of $dy/dt$ for some particular values of $x$, $y$, and $dx/dt$, but look at how much we can determine just from this related rates equation: when $y$ is larger than $x$, the top end is moving more slowly than the bottom end, and conversely when $x$ is greater than $y$, the top end is moving more quickly than the bottom end. There is just one moment when the two ends are moving at the same speed, which is when $y = x$, or in other words, when the ladder is at a 45 degree angle. And as the distance between the top end and the floor approaches zero, the speed of the top end approaches infinity. (Not physically possible, of course, but it explains why there’s such a quick movement at the end of the process.) There; now I feel like I’ve learned something!
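To put numbers to this (a quick sketch of my own; the 10-foot length and 1 foot per second pull rate are the traditional values), we can tabulate the speed of the top end at several positions of the bottom end:

    import numpy as np

    L, dxdt = 10.0, 1.0                      # ladder length (ft), pull rate (ft/s)
    x = np.array([1.0, 5.0, L / np.sqrt(2), 9.0, 9.9, 9.99])
    y = np.sqrt(L**2 - x**2)                 # height of the top end
    dydt = -(x / y) * dxdt                   # the related-rates equation above

    for xi, v in zip(x, dydt):
        print(f"bottom at x = {xi:5.2f} ft: top moving at {v:8.3f} ft/s")

The two ends move at the same speed when $x = L/\sqrt{2} \approx 7.07$ feet (the 45 degree position), and the top end’s speed blows up as $x$ approaches 10 feet.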

I have five more examples to illustrate how much more interesting I think related rates are when tied to direct observation. This will probably belabor the point, but every standard textbook strips these examples of any interest by focusing too much on a single moment in time.

The next example involves inflating a balloon. This, again, is easy to demonstrate. I can’t take a deep enough breath to fill the whole balloon at once, but even with two puffs, exhaled at a near-constant rate, it’s obvious to students that the size (i.e., diameter, or radius) of the balloon grows more quickly at first, then more slowly. Anyone who’s worked with an air or helium tank has surely experienced this phenomenon. Why is this happening? And how much more slowly is the diameter increasing as time goes on? Here there’s very little modeling involved; essentially the entire model is provided by assuming the balloon is a sphere and using the formula for the volume of a sphere in terms of its radius, $V = \frac{4}{3}\pi r^3$. Differentiating with respect to time gives the relation $\frac{dV}{dt} = 4\pi r^2 \frac{dr}{dt}$, so \[ \frac{dr}{dt} = \frac{1}{4\pi r^2}\frac{dV}{dt}. \] Some books reach this equation, but as with the ladder they again jump to plugging in values for a specific time, rather than noting the following: if $dV/dt$ is constant, then $dr/dt$ is inversely proportional to the square of the radius! And even more, $4\pi r^2$ is the surface area of a sphere with radius $r$, so this relationship between rates is directly related to the fact that the derivative of the volume of a sphere with respect to its radius is the surface area! That is, the size of the surface of the balloon is what determines, together with the rate the volume is increasing, how quickly the radius is increasing.
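Here is the same observation in numbers (a sketch with a made-up inflation rate of 100 cubic centimeters per second):

    import numpy as np

    dVdt = 100.0                              # constant inflation rate, cm^3/s
    for r in [1.0, 2.0, 4.0, 8.0]:            # radius of the balloon, cm
        drdt = dVdt / (4 * np.pi * r**2)      # dr/dt = dV/dt / (surface area)
        print(f"r = {r:4.1f} cm: radius growing at {drdt:6.3f} cm/s")

Each doubling of the radius cuts the radius’s growth rate by a factor of four, exactly the inverse-square relationship derived above.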

A similar phenomenon happens with the standard filling-an-inverted-cone problem. The demonstration here involves a martini glass and some colored water. (As I promise my students when doing this experiment, it’s just water.) My martini glass is about 12 cm across on top, and about 8 cm deep, giving it a volume of 300 milliliters. (That’s about 10 ounces, the size of two regular martinis—you don’t want to fill this glass with gin and drink it too quickly.) Having a nice big glass is useful for this demonstration: if I pour at a constant rate, the water level rises much more slowly near the top than near the bottom. The math shows just how much more slowly. The volume of a cone with height $h$ and base radius $r$ is $V = \frac{1}{3}\pi r^2 h$. From the geometry of this situation (using similar triangles, for instance), for my glass the radius of the surface of the water is always three-quarters of the water’s depth (here interpreted as height). We could use the relation $r = \frac{3}{4}h$ and substitute into the volume formula to get rid of the variable $r$, but there’s also no harm (as my students taught me) in differentiating first, using the product rule: \[ \frac{dV}{dt} = \frac{\pi}{3} \left( 2rh\frac{dr}{dt} + r^2\frac{dh}{dt}\right). \] Notice that this formula is valid for all cones varying in height, radius, and volume, whether or not the height and radius are linearly related at all times. The most obvious quantity of interest (assuming constant $dV/dt$) is $dh/dt$. From $r = \frac{3}{4}h$ we get $\frac{dr}{dt} = \frac{3}{4}\frac{dh}{dt}$, and also $h = \frac{4}{3}r$. The reason to solve for both of these quantities is that, by keeping both $dh/dt$ and $r$ in the equation and substituting out $h$ and $dr/dt$, we get $\frac{dV}{dt} = \frac{\pi}{3}\left(2r^2\frac{dh}{dt}+r^2\frac{dh}{dt}\right)$, or, after solving for $dh/dt$, \[ \frac{dh}{dt} = \frac{1}{\pi r^2} \frac{dV}{dt}. \] First of all, the ratio between the height and the radius has disappeared, so this formula now works for any inverted cone, not just my martini glass. And second of all, just as with the balloon, the rate at which the height increases depends on the “surface area” that is expanding, which in this case is just the base of the cone! Thus, again, the reason the water level rises more slowly near the top of the glass has a clear geometric interpretation. (Here’s a real-world application: I argue this works to the benefit of bartenders, who can pour into a martini glass fairly quickly without risk of overflowing, because the beverage level rises slowly near the top of the glass.)
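The analogous table for the martini glass (a sketch using the measurements quoted above and an invented pour rate of 30 cubic centimeters, about one ounce, per second):

    import numpy as np

    dVdt = 30.0                               # pour rate, cm^3/s
    for h in [1.0, 2.0, 4.0, 8.0]:            # depth of the water, cm
        r = 0.75 * h                          # for this glass, radius = (3/4) depth
        dhdt = dVdt / (np.pi * r**2)          # dh/dt = dV/dt / (area of water surface)
        print(f"depth {h:4.1f} cm: level rising at {dhdt:6.3f} cm/s")

Near the rim the level rises at about a quarter of a centimeter per second, down from nearly 17 cm/s at the first splash.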

I took the next two examples from Cornell’s Good Questions Project; come to think of it, it may be these questions that first planted in my head the idea of looking at related rates problems over time, without numbers. The situations are again standard for related rates problems, but the conclusions are much more interesting than a single rate at a single moment.

Consider an actor (say, Benedict) on a stage, illuminated by a light at the foot of the stage. Benedict casts a shadow on the back wall; how does the length of his shadow vary if he walks towards the light at a constant speed? The demonstration of this situation is particularly exciting, because you get to turn off the classroom lights, pull out a flashlight and a doll or figurine, and watch what happens to the shadow of the doll/figurine/actor on the wall as it moves towards the flashlight. Students observe that at first the shadow grows slowly (when the figure is close to the wall), then more quickly as he approaches the light. Modeling this situation generally provides the first major geometric hurdle for my students, because it involves the imagined line that emanates from the light, passes by Benedict’s head, and finally reaches the back wall, thereby determining the height of the shadow. (I wonder if many of them have never thought about the geometry of how shadows relate to the objects that cast them.) I’ll let the reader work out the fact that, if Benedict’s height is $h$, the distance from the light to the back wall is $D$, the distance from Benedict to the light is $x$, and the height of the shadow is $s$, then $\frac{s}{D} = \frac{h}{x}$. (Hint: use similar triangles.) Here the only variables are $x$ and $s$, so the related rates equation is \[ \frac{ds}{dt} = -\frac{hD}{x^2}\frac{dx}{dt}. \] Students are at first perplexed by the negative sign: shouldn’t the shadow be increasing? If so, why does its derivative appear to be negative? Then they realize: ah, if Benedict is walking towards the light, then $dx/dt$ is negative, so $ds/dt$ is in fact positive! And so it becomes clear that the height of the shadow increases much more rapidly when Benedict is near the light than when he is near the wall. (I generally give specific values for the height of the actor and the distance from the wall to the light in this question, so that it’s more obvious which values are constant.)

I don’t have a standard demonstration for this next problem, because I use it as a quiz question (although maybe not anymore, now that I’ve written about it here), but it’s easy enough to devise an experiment. This situation is similar enough to the previous one that its result is a bit surprising. Suppose a streetlight at height $L$ is the only source of illumination nearby, and a woman (say, Agatha) of height $h$ walks at a constant speed away from the light. As she gets farther away from the light, does her shadow grow more quickly, more slowly, or does it grow at a constant rate? If $x$ again denotes the distance to the light (well, really from Agatha’s feet to the base of the lamp, which is not the same as her distance to the source of illumination), and $s$ is the length of Agatha’s shadow, then similar triangles produce the relation $\frac{s}{h} = \frac{s + x}{L}$. We can rearrange this into a simple proportion between $s$ and $x$: $s = \frac{h}{L-h} x$. (Here’s an interesting feature of this equation already: it only makes sense if $h < L$, that is, if Agatha is shorter than the lamppost!) Now we differentiate to get \[ \frac{ds}{dt} = \frac{h}{L - h} \frac{dx}{dt}. \] So if Agatha’s speed is constant, then her shadow’s length is also increasing at a constant rate. This example shows especially well why it’s dumb to look at related rates at a single moment in time. Most book exercises of this sort ask how quickly the shadow is growing when Agatha is at a particular distance from the lamp. But it doesn’t matter how far away she is, and the math proves that it doesn’t matter.

There’s a risk in related rates exercises of always resorting to problems that only involve differentiating polynomials, so here’s an example that uses trigonometric functions. The demonstration I use: I walk back and forth in front of the class and tell the students to be mindful of what their heads do as they follow my movement. After a couple of times, several of them observe that their heads must turn more quickly when I’m closer to them. I point out that this is something anyone who’s had to run a video camera at a race must be aware of. (It’s also apparent to someone riding in the passenger seat of a car, keeping their gaze fixed on a single tree or other immobile object: for a long time, your head turns little, but when you’re close to the object, you have to turn quickly to keep it in view.) I generally set up the problem on the board as though it is taking place at a racetrack. Suppose a runner is moving along a track (let’s assume it’s straight for simplicity) at $v$ feet per second. You’re watching from a position $D$ feet away from the track. How quickly does your head need to turn to keep following the runner? The answer depends on how far away the runner is. One has to introduce a reasonable coordinate system and some useful variables: good choices are the position $x$ of the runner relative to the point of the track closest to you, and the angle $\theta$ by which your head is turned from looking at this closest point. Then we get the relation $\tan\theta = \frac{x}{D}$, and differentiating with respect to time results in the equation $\sec^2\theta \frac{d\theta}{dt} = \frac{1}{D} \frac{dx}{dt}$, or \[ \frac{d\theta}{dt} = \frac{v}{D} \cos^2\theta \] (using the assumption that $dx/dt = v$). When $\theta = 0$, so that the runner is closest to you, the rate at which your head turns is $v/D$, which depends only on how fast the runner is going and how far away from the track you are. (Notice that the units work out: the radian measure of an angle is technically dimensionless, and so we expect its rate of change not to have any dimension other than 1/time. Since $v$ has dimension of distance/time and $D$ has the dimension of distance, $v/D$ has the dimension 1/time.) As $\theta$ increases (in this scenario, $\theta$ is never greater than a right angle), the change in the angle of your head to follow the runner happens more slowly, because $\cos^2\theta$ is closer to zero.
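In numbers (another sketch with invented values: a runner at 20 feet per second watched from 50 feet away):

    import numpy as np

    v, D = 20.0, 50.0                      # runner's speed (ft/s), distance to track (ft)
    x = np.array([-200.0, -50.0, 0.0, 50.0, 200.0])  # position along the track (ft)
    theta = np.arctan(x / D)               # angle of your head from straight ahead
    dthetadt = (v / D) * np.cos(theta)**2  # radians per second

    for xi, w in zip(x, dthetadt):
        print(f"runner at x = {xi:6.1f} ft: head turning at {w:6.4f} rad/s")

The maximum rate, $v/D = 0.4$ rad/s, occurs as the runner passes directly in front of you; 200 feet down the track, your head turns at only about 0.024 rad/s.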

These are just a few examples of standard situations involving related rates that become much more interesting when the myopic attention to a single moment in time is removed. I’m sure most readers of this post can do the calculations I’ve shown on their own, but the tendency to home in on a single rate at a single point in time is so entrenched that I wanted to work through the full analyses. I don’t know that my students are better at solving related rates problems than other students, but I have noticed that they’re much less likely to insert specific quantities into a relation before it’s necessary than when I taught the subject years ago. I haven’t had time to strip all such problems of the detritus that comes with wanting a numeric answer, but I believe our understanding (and our calculus students’ understanding) of the world will be much improved by making the effort to transform these problems into meaningful questions.

Here are two other examples that I won’t work out in detail. One scenario has a boat being pulled into a dock by a rope attached to a pulley elevated some distance above the boat. If the rope is pulled at a constant rate, the boat in fact speeds up as it approaches the dock! (I tried demonstrating this once with a string tied to a stuffed animal pulled across a desk, with moderate success.) Another common type of problem considers two boats moving in perpendicular directions (or cars moving along perpendicular roads), and asks at a certain point in time whether the distance between them is increasing or decreasing. That’s silly. Why not establish the relation between them, and ask at what times the distance is increasing, and at what times the distance is decreasing? If there’s a time when the rate of change in distance is zero, then the boats (or cars) are at their closest (or farthest) positions, which connects to the study of optimization, which has its own set of issues…


P.S. I should have known better than to look at Khan Academy’s treatment of related rates. The videos show all the marks of what is classically wrong with these problems: the irrelevant information of what variables equal at a single moment in time is presented up front along with everything that’s constant in the situation, and in the end the answer is a single, uninformative number. Even when an interesting equation is present on the screen, Khan rushes past it to get to the final number. How can we get our students to ask and answer more interesting questions than these, about the same situations?

Tuesday, November 03, 2015

remove the antithesis

Today, for the first time in years, I included a related rates lesson in my calculus class. I had never liked related rates, and when I got my own class and could create my own syllabus, I dropped the topic. This fall I’m at a new school, though, and I decided while revamping my course plans to give related rates another shot.

Background: related rates didn’t sit well with me for a long time before I could enunciate why. Then I learned about the notion of “low-threshold, high-ceiling” tasks, which provide multiple levels of entry for students, as well as a lot of space for growth and exploration. I realized that the classic related rates problems fail both tests. They generally have a high threshold, because students have to understand the entire process of translating the word problem into symbols, then differentiating, then solving, before they have any measurable confidence that they can begin such a problem. And they generally also have a low ceiling, because once the immediate question has been answered, there is no enticement to do further analysis, or even any indication such analysis is possible.

As an example, consider the problem of the sliding ladder. This is included in almost every textbook section on related rates, in almost exactly the following form.

A 10-foot long ladder is leaning against a wall. If the bottom of the ladder is sliding away from the wall at 1 foot per second when it is six feet away from the wall, how quickly is the top of the ladder sliding downward at that instant?
Now, that is an incredibly difficult problem to read. Some books may have slightly better phrasing (I decided not to quote any book in particular, so as not to single out just one malefactor), but the gist is the same. Before you’ve even gotten a sense of what the situation is and what’s changing, you’re asked a question involving bits of data that seem to come out of nowhere, and whose answer is completely uninspiring.

Like I said, for a few years my solution was to avoid these types of problems entirely. I had seen too many students struggle to set up these problems and go through the motions of solving them, only to get a single number at the end that showed nothing other than their ability to set up and solve a contrived problem. What I realized while preparing for today’s class is that, when the problem is done, you don’t feel like you’ve learned anything about how the world works. Calculus is supposed to be about change, yet the problem above feels static because it only captures a single moment in a process. A static answer is antithetical to the subject of calculus. Moreover, most related rates problems arise out of nowhere, flinging information at the reader willy-nilly to answer a single question, despite a decidedly unnatural feel to these questions. Unnatural questions are antithetical to mathematics. So I decided to remove these elements of antithesis as best as I could.

Here is the question I posed at the start of class today.

A 10-foot long ladder is leaning against a wall. Suppose you pull the bottom away from the wall at a rate of 1 foot per second. At the same time, the top of the ladder slides down the wall. Does it:
  • slide down at a constant rate,
  • start out slowly, then speed up, or
  • start out quickly, then slow down?
I claim this version is both more natural and easier to start discussing than the near-ubiquitous original. It almost seems like a question one might come up with on one’s own. It’s clear what quantities are changing, and that there is a relationship between them. The process itself can be demonstrated; I used a ruler and a book, rather than bringing a ladder to class. (How would you demonstrate that instantaneous rate of change in the original problem?) No overly specific information is given. And best of all, the answer is a bit surprising, at least to some. (When I asked my students what they thought after a couple minutes of discussion, about half thought the top would start slowly, then speed up, and about half thought it would slide at a constant rate.)

I don’t claim any originality in this idea. Probably many other excellent math teachers have made exactly this change. I may have encountered it as one of Dan Meyer’s examples, or somewhere else, and it stuck in the back of my mind. I should emphasize that I really hated related rates problems, and I saw little chance of rehabilitating them. I’m glad to have realized that they can be interesting and reveal interesting things about the world, when they are restored to the state of natural questions.

I doubt today’s lesson was perfect. I still probably talked too much and introduced symbols too quickly. But it was good enough that I’m going to keep teaching related rates in my calculus classes from now on.

If you have other examples of this better type of related rates problem, please share in the comments!

Saturday, October 11, 2014

geometry at the fair

Last month, West Springfield once again hosted the Eastern States Exposition (or “The Big E”), which brings together fair activities from six states: Maine, New Hampshire, Vermont, Massachusetts, Connecticut, and Rhode Island. It’s great fun to attend, and includes displays of the finest crafts to have competed in county and state fairs from all over the northeastern U.S. in the past year. This means, for instance, that there are a bunch of great quilts.

Symmetry naturally plays a large part in the design of these quilts. The interplay between large-scale and small-scale, and between shapes and colors, creates aesthetic interest. This quilt, for instance, presents squares laid out in a basic tiling pattern (a square lattice). Each square contains a star-shaped figure. The star itself has fourfold dihedral symmetry, which matches the symmetry of the lattice, but the choice of colors in the stars breaks the symmetry of the reflections, resulting in cyclic (i.e., pure rotational) symmetry.

This quilt also shows fourfold dihedral symmetry in the shapes, which is broken into cyclic symmetry by the colors. It hints at eightfold (octahedral) symmetry in some places, but this is broken into fourfold symmetry by the colors and by the relationship of these shapes to the surrounding stars.
This pattern shows fourfold cyclic symmetry at the corners, but that’s not what first caught my eye. The basic tile is a rectangle, which has the symmetry of the Klein four-group (no, not that Klein Four Group). For the two quilts above, I first noticed the large-scale symmetry that was broken at the small scale; here I first saw the limited small-scale symmetry that is arranged in such a way as to produce large-scale symmetry. (I think this is because I tend to notice shapes before colors.)
This quilt uses the square lattice on the large scale, but varies the type of small-scale symmetry. Each square contains the same shapes, but they are colored differently so that sometimes the symmetry is dihedral, sometimes cyclic.
This next quilt is geometrically clever in many ways. It has no reflection symmetries, even disregarding the colors, although the basic shapes that comprise it (squares and a shape with four curved edges, two concave and two convex, for which I have no name; Edit 10/15: in an amusing exchange on Twitter, I learned that this shape is described among quilters as an “apple core”) do have reflection symmetries. (I am disregarding the straight lines that cut the apple cores into smaller, non-symmetric pieces.) The centers of the squares lie on a lattice that matches the orientation of the sides of the quilt, but the sides of the squares are not parallel to the sides of the quilt. The introduction of curved shapes also acts in tension with the rectangular frame provided by the quilt medium.
Some of the quilt designs rejected fourfold symmetry altogether. Here is one based on a hexagonal lattice:
and another based on a triangular lattice:
(These two lattices have the same symmetries.)

Here is a quilt that stands out. It appears to simply be pixellated:

but if you look closely, you’ll see that the “pixels” are not squares, but miniature trapezoids.
It therefore has no points that display fourfold symmetry. All rotational symmetries are of order two.

All of the types of symmetries of the above quilts (except, perhaps, the one that used some tiles with dihedral symmetry, some with merely cyclic) can be described using wallpaper groups, which I leave as an exercise for the reader.

This next design seems more topological than geometric: it is full of knots and links.

This quilt has an underlying square lattice pattern, but the use of circles again evokes links, at least for me.

It was a surprise to come across a quilt with fivefold symmetry, but it makes perfect sense for a tablecloth.

Finally, this quilt was just gorgeous. The underlying pattern is simple—again a square lattice—but the diagonal translations are highlighted by the arrangement of the butterflies.

As you can see, it was decorated as “Best of Show”. We were particularly happy to see it receive this prize, because we had previously seen it in Northampton’s own 3 County Fair!

Friday, August 22, 2014

formative assessment isn’t scary

I get a little jumpy around nomenclature. This probably comes from being a mathematician; we spend a lot of time coming up with names for complex ideas so that they’re easier to talk about. Naming a thing gives you power over it and all that. So when we come across a new name, it could take anywhere between a few minutes and a few months to unpack it. An abelian group, for instance, can be completely and formally defined very quickly, whereas a rigorous definition of Teichmüller space often takes several weeks in a course to reach. Some things are in between, easy to define but not-so-easy to figure out why the object has a special name (see dessin d’enfant). Very often a major step along the way to understanding something is grasping the simplicity—the inevitability, even—of its definition.

So it is with formative assessment. When I first learned about the formative/summative assessment distinction, I got nervous. I thought, “So, besides giving tests and quizzes, I need to be doing a whole bunch of other things in class to find out what students are thinking? How much more class time will this take? How much more preparation will it take? How will I ever incorporate this new feature into my class, and how bad will it be if I don’t manage to?” I think I got caught up in the impressiveness of the term assessment; that seemed like a big “thing”, and doing any kind of assessment must require a carefully crafted and substantial process.

So let’s back up a bit. In teaching, assessment means anything that provides an idea of students’ level of understanding. If it’s not graded, it’s formative.

That’s it.

As a teacher, unless you have literally never asked “Are there any questions?”, you have done formative assessment. Asking “Are there any questions?” is a crude and often ineffective means of formative assessment, but it is assessment nonetheless. You and I are already doing formative assessment, which means that we don’t have to start doing it; we can instead turn to ways of doing it better. Somehow I find that easier.

“Formative assessment” is more like “abelian group” than “Teichmüller space”. If you have ever added integers, you have worked with an abelian group. But having an easily-grasped definition doesn’t have to mean that a concept is limited. In fact, simple definitions can often encompass a broad range of ideas, which happen to share a few common features. There are entire theorems and theories built on abelian groups. Naming a thing gives you power over it. Now that we’ve named formative assessment, let’s see how we can build on it.

David Wees has a collection of 56 different examples of formative assessment, which range from the “Quick nod” (“You ask students if they understand, and they nod yes or no”—possibly virtually, which enables anonymity) to “Clickers” to “Extension projects” (“Such as: diorama, poster, fancy file folder, collage, abc books. Any creative ideas students can come up with to demonstrate additional understanding of a topic.”) John Scammell has a similar collection of Practical Formative Assessment Strategies (some overlap with Wees’s list), grouped into sections like “Whole Class Strategies”, “Individual Student Strategies”, “Peer Feedback Strategies”, “Engineering Classroom Discussion Strategies”, and so on.

Formative assessment doesn’t have to take much time or preparation. You’re probably already doing it without realizing it. Adding some variety to the methods of assessment, however, can provide a more complete picture of students’ understanding, to their benefit. Feel free to add more resources in the comments.

Tuesday, August 19, 2014

a reflection on course structure, and standards for calculus

Here’s what I’ve learned about writing standards: it’s hard to get them balanced properly. This challenge is inherent in developing any grading system. I used to fret about whether quizzes should count for 15% or 20% of the final grade; now I fret about whether the product, quotient, and chain rules should be assessed together or separately. (I’m happier trying to solve the latter.)

Another challenge is in setting up standards so that assessments have some coherence. I’ll explain. My first couple of times creating standards, I sat down and made a list of all the things I wanted my students to be able to do by the end of the semester, grouped into related sets, with an eye towards having each standard be of roughly equal importance (as I mentioned in the previous paragraph). After all, that’s what standards are, right? All the skills we want students to develop? That done, I told myself, “Okay, now every assessment—every homework, quiz, and test—will have to be graded on the basis of items in this list.” In principle, it’s nice to have this platonic vision of what students should do and know, including all the connections between related ideas (parametrization means imposing coordinates on an object; it doesn’t really matter what dimension it has, so parametrizing curves and surfaces should go together as a single standard). However, while this list said a lot about what I thought students should do, it didn’t say much about what I was going to do. It didn’t fit the structure of the course, just of the ideas (oh, wait, we’re parametrizing curves in week 2 and surfaces in week 10—why didn’t I notice that before?). Looking back, I can see that a lack of contiguousness within a standard does reflect a conceptual distinction between the concepts involved (hmmm, maybe the idea of drawing a curve through space is conceptually different from laying out a coordinate system on a curvy surface). I ended up assessing “partial” standards at various points in the semester, which is absurd on the face of it. It’s one thing to assert that a standard may be assessed at different points in the semester, based on how the skills are needed for the task at hand; it’s another to say, “Well, you’re learning part of a skill now, and I’ll test you on that, and you’ll learn the rest of this same skill later.”

I’ve had fewer slip-ups of this sort as time goes on, but I’ve never quite been happy with how the standards match up with the time spent in class. Both of the problems above keep rearing their heads. So for this fall, I decided to look at the schedule of the class and write standards based on what we do in 1–2 days of class. (Reading this blog post by Andy Rundquist earlier in the summer helped push me in this direction.) If it seemed like too little or too much was getting done in a day, well, that’s an indication that the schedule should be modified. In a semester with 38 class meetings, there should be sufficient time allotted for review, flexibility, and a few in-depth investigations; if most standards occupy a single class meeting, then 25–30 content standards fill the bulk of the schedule while leaving 8–13 meetings free for those other purposes. That’s a few more standards than I’ve had in the past, but not by many.

Here’s the conclusion I’m coming to: standards both shape and are shaped by the structure of the class. Part of what we as instructors bring to a class is a personal view of how the subject is organized and holds together. If you and I are both teaching calculus, there will be a great deal of overlap in what skills we believe should be assessed, but there will be differences, and we’ll find different dependencies. A fringe benefit of writing out standards is that we can see this structure clearly—even better, I believe, than just by looking at the order of topics. They force us to be honest about our expectations, thereby combating a certain tendency, observed by Steven Krantz in How to Teach Mathematics, to give tests based on “questions that would amuse a mathematician—by which I mean questions about material that is secondary or tertiary. … In the students’ eyes, such a test is not about the main ideas in the course.” You may want students to use calculus mostly in applied settings where exact formulas for the functions involved are not known, whereas I may be primarily concerned with students’ ability to deal formally with closed-form expressions and to deeply understand classical functions. We can both be right. We should both let our students know what we expect of them, rather than making them guess. In short, standards are not completely standardized—they highlight the commonalities and the particularities among courses that treat basically the same material.

With all that said, here I will share my list of standards for Calculus 1 this semester. Because of the length of the list, I’ll just link to a Google document that contains them: Standards for MTH 111, Fall 2014. They are grouped into twenty-six “Content standards” and three “General standards”. Over time, I’ve settled on these last three as skills that I want to assess on every graded assignment: Presentation, Arithmetic and algebra, and Mathematical literacy and numeracy. These are essential skills for doing anything in calculus, and struggles in calculus can often be attributed to weaknesses in these areas. We’ve all had students who are fine at applying the quotient rule to a rational function, but are stymied when it comes to expanding and simplifying the numerator of the result. That can hamper solving certain kinds of problems, and I want to be able to point to “algebra”, not anything calculus-related, as the area that needs attention. The descriptions of the content standards are shaped in part by our textbook, Calculus: Single Variable by Deborah Hughes-Hallett et al. I like to introduce differential equations fairly early in the course—this follows a tradition at my college, too—so some standards related to that are sprinkled throughout. I should also confess an indebtedness to Theron Hitchman for the language of using verb clauses to complete the sentence “Student will be able to …”

In addition to the 29 standards in the document linked above, I have one more for this class: Homework. Oh, homework. The calls to treat homework purely formatively and to stop grading it (link goes to Shawn Cornally’s blog) have not quite reached the halls of post-secondary education. Many college and university instructors believe homework is so important that they make it worth a substantial fraction of the students’ grades. And it is important, but solely as a means for practicing, taking risks, developing understanding, and making mistakes. (See this video by Jo Boaler* on the importance of making mistakes: “Mistakes & Persistence”.) Grading homework almost always means that its usefulness as a place to take risks is undermined. Last semester I didn’t grade homework at all, although I did have a grader, who made comments on the homework that was submitted. On average, about 1/3 of the class turned anything in. At the end of the semester, I got two kinds of feedback on homework. A few students expressed appreciation that the pressure to make sure that everything in the homework was exactly right was relieved. Several, however, said they realized how important doing homework is to their understanding—often because they let it slip at some point—and urged me to again make it “required”. I want to honor both of these sentiments. I want to encourage students to do the homework and to feel like it is the safest of places to practice, make mistakes, and thereby improve. So I will count both submissions and resubmissions of homework towards this standard. A student who turns in 20 homework assignments or thoughtfully revised assignments will earn a 4 on this standard, a student who turns in 15 will earn a 3, and so on. I hope this will have the desired effect of giving students maximum flexibility and responsibility in their own learning, while also acknowledging the work and practice they do.
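
Since the cutoffs suggest a simple pattern, here is a minimal sketch (in Python) of how the homework standard might be scored. The function name is hypothetical, and the “5 assignments per point” rule is my extrapolation from the two cutoffs stated above:

```python
def homework_score(submissions):
    """Convert a count of homework submissions (or thoughtful revisions)
    into a score on the 4-point standard scale.

    Assumption: the stated cutoffs (20 -> 4, 15 -> 3, "and so on")
    continue linearly at 5 assignments per point, capped at 4.
    """
    return min(submissions // 5, 4)

# Example: 20 submissions earn a 4, 17 earn a 3, 9 earn a 1.
print(homework_score(20), homework_score(17), homework_score(9))
```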

All of the rest of the standards, general and content, will also be graded out of 4 points, with the following interpretations: 1 – novice ability, 2 – basic ability, 3 – proficiency, 4 – mastery. (I’ve adapted this language from that used by several other SBG instructors). At the end of the semester, to guarantee an “A” in the class, a student must have reached “mastery” in at least 90% of the standards (that is, have 4s in 27 out of 30 standards), and have no grades below “proficiency”. To guarantee a “B”, she must have reached “proficiency” in at least 90% of the standards, and “basic ability” in the rest. A final grade of at least “C” is guaranteed by reaching “basic ability” in at least 90% of the standards.
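
To make those thresholds concrete, here is a minimal sketch (again in Python, with a hypothetical function name) of how the guarantees translate into a computation; the post doesn’t specify what happens below the “C” guarantee, so the fallback value is a placeholder:

```python
def guaranteed_letter_grade(scores):
    """Return the letter grade guaranteed by a list of standard scores
    on the 4-point scale (1 novice, 2 basic, 3 proficient, 4 mastery)."""
    n = len(scores)
    mastery = sum(1 for s in scores if s >= 4)     # reached "mastery"
    proficient = sum(1 for s in scores if s >= 3)  # at least "proficiency"
    basic = sum(1 for s in scores if s >= 2)       # at least "basic ability"

    if mastery >= 0.9 * n and proficient == n:     # 90% mastery, none below proficiency
        return "A"
    if proficient >= 0.9 * n and basic == n:       # 90% proficiency, rest basic
        return "B"
    if basic >= 0.9 * n:                           # 90% basic ability
        return "C"
    return "below C"  # not specified in the post

# Example: with 30 standards, 27 fours and 3 threes guarantee an "A".
print(guaranteed_letter_grade([4] * 27 + [3] * 3))
```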

Two other blog posts about standards in college-level math classes went up yesterday:

  • Bret Benesh wrote about his near-final list of standards for calculus 1, and again explained his idea to have students identify for which standards they have demonstrated aptitude when they complete a test or quiz. I really like this idea, as it essentially builds metacognition into the assessment system. I will have to consider this for future semesters.
  • Kate Owens posted her list of standards for calculus 2, which she has organized around a set of “Big Questions” that highlight the main themes of the course. This is particularly important in calculus 2, which can sometimes seem like a collection of disconnected topics. In an ensuing discussion on Twitter, it was pointed out that these kinds of Big Ideas are what can really stick with students, far beyond the details of what was covered.

After reading Kate’s post, I looked at my monolithic list of standards and attempted to organize them into groups based on three big questions: “What does it mean to study change?” (concepts of calculus), “What are some methods for calculating change?” (computational tools), and “What are some situations in which it’s useful to measure change?” (applications). I was not particularly successful at sorting my standards into these categories, but I like the questions. I may ask the students how they would use the various standards to answer them. There are trade-offs in any method of developing a set of standards, and I am grateful to these other instructors, who are also working on changing how we think about grading, for sharing their ideas.

* Jo Boaler’s online courses on “How to Learn Math” are currently open:
For teachers and parents until October 15 ($125)
For students until December 15 (free)