Tim Burke writes of his grading:

I'm not terribly consistent in my internal understanding of what I'm doing when I grade. In general, I tend to imagine the B as the default grade, and an A as a grade that says, "You did something considerably better than ordinary". The C means, "This is really not as good as ordinary work". Failures are either, "This is dramatically worse than the norm" or "You blew this off, and I can see that you did".

I freely confess that I tend to have a slightly different understanding of how this scaling works out based on my understanding of what a student is capable of. The more I've graded a student, the more I form an expectation about what they can do. A student who has done consistently excellent, original work for me is likely to draw a much more negative reaction from me for doing ordinary work than a student who has done fine, decent but undistinguished work consistently. If I graded blind, I suspect I'd still have some pretty good guesses over time about the identity of writers, but maybe that would help shake up some of my assumptions. I'm weighing trying to do that next year for the first time.

I'm of two minds on this.  The purpose of a grade is to show mastery (or not) of some volume of material.  Is it fair to set the bar higher for me than for someone who isn't as capable?  Or vice versa?  Is it fair to send the signal to employers that I wasn't up to scratch even when I did objectively better work than some other student?

Maybe.  After all, one of the things that employers and graduate schools are presumably looking for is the ability to exert oneself consistently.  Still, doesn't this penalize students who develop a relationship with a professor?
