Building on Kaysi Holman’s recent blog post “Rethinking Assessment: Some Questions to Ask Yourself,” I got to thinking about my recent experiences teaching the Introduction to Modern Art survey at Lehman College last semester.
In-class exams based on slide identifications are par for the course in art history surveys. In fact, I would go so far as to say they remain the standard for assessing student learning in the art history classroom. Typically, during these exams students are expected to recall some or all of the following:
a. Artist Name
b. Title of Artwork
c. Date
d. Medium (e.g. oil on canvas, marble sculpture, etc.)
e. Period or Style
f. Significance (1–2 sentences about why this particular artwork is historically important)
Each of these elements is usually weighted equally: half a point for getting the name of the artist correct, half a point for the date, etc. Having proctored (and taken) dozens of exams like this, I never really questioned the logic of this formula. You can’t really know something before you can identify it properly, right? Actually, it depends on what you consider to be knowledge. And the answer to that lies less in the unquestioned norms of teaching in the discipline than in the benchmarks you decide to set as a teacher based on realistic student expectations and goals.

Are you leading an upper-level lecture or seminar with mostly art history majors? Then yes, by all means, make sure that they know without a shadow of a doubt that Claude Monet’s Impression, Sunrise was painted in 1872. If, like me, you’re teaching a university-wide required humanities class for students who can count on one hand the number of times they have visited a museum, much less an art museum? Overwhelming them with lists of names, dates, and artistic movements to memorize—and then immediately forget—may not be the best way to set up a (hopefully) lifelong engagement with art and museums.
I realized very quickly that the specter of memorizing names and dates was becoming a huge stumbling block in the course, but more than that, it was a barrier for students to engage with the art itself. Rather than considering the significance of a work, they were frantically writing down dates or asking whether a painting would be categorized as Fauvism or Expressionism for the exam.
I made a decision early on that I wouldn’t do away with the format of the in-class exam entirely. Visual analysis does require some thinking on your feet, and I strongly believe that analyzing an image in real time is a valuable skill that can have benefits both inside and outside the classroom. Instead, I made the exam open book and restructured it to focus on qualitative, comparative analysis rather than quantitative data (name, date, movement, style, etc.). In other words, I put two images up on a screen and asked students to compare and contrast them both formally and in terms of their historical significance and the motivations of the artists who produced the works. This solution had a double advantage: it relieved student anxiety around the exam itself and allowed students to engage in a more nuanced way with the material. By comparing two artworks and their significance, they had to do more than recite what had been taught in class; they had to develop their own observations and analysis, which had the added benefit of building confidence in their critical thinking skills.
I don’t think this revised exam format works in every case. In fact, I’m not sure it would work in most cases! But it did show me the importance of reexamining classroom goals and thinking about how exams can go from being an impediment to student learning to being a catalyst for it.