Sunday, February 13, 2022

Assessments and being evaluated





An element of an active academic life consists of assessing others and being assessed. It runs like a common thread from the first exam as an undergraduate to the expert opinions of the retired professor. But how does it actually work? How do you know whether something is better than something else? What is scholarly quality and what makes someone qualified to assess this?

Apart from grading students, I have in the past few years reviewed PhD program applications, journal articles and research funding applications, and served as an external examiner. I have not received any formal training in assessing and evaluating historical scholarship. Or perhaps I have. I have read, analyzed and evaluated scholarly texts since at least the early undergraduate level. I have participated in critical discussions around seminar tables. What's this if not formal training in scholarly evaluation?

The seminar culture in which I was trained reached a new level when I was accepted to the PhD program. Sure, the PhD program involved a good deal of coursework. However, the higher seminar of the Department of History is where I learned the most. In this setting, you were expected to be present and say things – even though the seminar offered no credits and attendance wasn't formally required. In my PhD generation, there was a strong sense of loyalty toward the seminar. Being there was more important than sitting in front of the computer writing or digging in the archives.

Then as now, I greatly appreciated general seminar skills. I particularly appreciated those who didn't repeat the same points over and over and who were also capable of giving good comments on things outside their own area of expertise. This requires both intellectual flexibility and an ability to perform empathetic readings of texts that don't really interest you. To do so, you need to approach texts on their own terms and be able to distance yourself from your own worldview. Being able to think on your feet doesn't hurt either.

These are all typical “invisible skills.” They don't appear in publication lists, CVs or teaching portfolios. They may, however, sometimes be glimpsed between the lines of books and articles. But far from always. One is often surprised by how unremarkable renowned academics can be when operating outside their own comfort zone. On other occasions, you are amazed at how skillfully a master's student discusses something he or she doesn't know all that much about.

Somewhere here lies my basic view of what constitutes scholarly quality. Perhaps one could describe it as a kind of intellectual elasticity, a general scholarly ability to assess and evaluate things. Over the years, I have come to realize that this is only one basic view among many. Other academics value completely different things most: creative choices of subject matter, command of vast and hard-to-access archival materials, the ability to write, or facility with certain types of theoretical reasoning. Some academics place the highest value on work close to their own. Others take the opposite stance and demand even more from academics working in proximity to their own field.

Insight into this kind of value pluralism has over time made me more humble about my own basic view. How can I be sure that it's all that good? And, by the way, who's to say that all academics should be good at the same things? The field of history wouldn't be particularly interesting if it consisted only of seminar masters. We also need academics who dig in archives and others driven by theoretical curiosity. We need excellent writers and people who can count. Without this diversity of academic types, the seminar culture would quickly turn into a self-referential intellectual exercise. An echo chamber.

Another way of thinking about evaluations and assessments is to actually study the phenomenon. People outside the academy might perceive this as the pinnacle of navel-gazing. But for those of us who work in the academy, it is extremely interesting. My own favorite study in this field is How Professors Think (2009) by sociologist Michèle Lamont. In the book, she carries out ethnographic studies of a number of multidisciplinary panels that distribute prestigious grants and fellowships. She sits in on their meetings and interviews the evaluators before and after. She is particularly interested in how professors assess each other's efforts as evaluators. What earns respect? What makes you trust someone's judgment? What makes you lose confidence in someone?

This post is not the place to summarize her findings. But the book is definitely worth reading in its entirety. It's a great qualitative study that offers many general insights. It also gives you a chance to be a fly on the wall in one of the rooms where your own applications actually end up. Because even if all you ever see are acceptance letters, rejection letters and, sometimes, the arguments made for or against you, these decisions are obviously made by people. And they are made not by individuals in isolation but by groups. A bit like a seminar.


Further reading: "The role of writing applications in the research process" and "What academics can learn from players"

---------
Do you want to sign up for the blog's mailing list? Send an e-mail to david.larsson_heidenblad@hist.lu.se
