The SEO of academia?

Michael Burawoy describes one of the current prevailing models of the university as “the regulation model,” which aims to make the university production of knowledge “more efficient, more productive and more accountable by more direct means.” In other words, to make what universities do (supposedly, “produce knowledge”) equivalent to what industries do (“produce computers,” “produce financial instruments”), and to improve that activity according to equivalent metrics: efficiency, productivity, accountability.

This requires some means of measuring what it is that all those professors, post-docs, and grad students actually DO. And since they don’t generally make physical stuff, or much money, one option is to quantify their intellectual output through publications: one of the more tangible creations expected of professional academics.

All of this is to say that how we measure academic success is an important question in the age of austerity. If budgets (and one’s employment status) depend on meeting quantitative targets, people will probably alter their behavior accordingly. Which brings me to some questions about Impact Factor!*

As I understand it, Impact Factor (IF) aims to measure the relative importance of a journal through how often its articles are cited. Bjoern Brembs summarizes some research suggesting that IF is actually a better predictor of a paper’s chance of being retracted than of its being cited. There’s some evidence this may be because journals with a high IF are more likely to publish flawed studies.
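For readers unfamiliar with how the number is actually computed: the standard two-year Impact Factor is just citations received this year to a journal’s articles from the previous two years, divided by the number of citable items it published in those years. A minimal sketch, using made-up citation counts purely for illustration:

```python
def impact_factor(citations_by_pub_year, articles_by_year, year):
    """Two-year impact factor for `year`.

    citations_by_pub_year: publication year -> citations received during
        `year` by articles the journal published in that year.
    articles_by_year: publication year -> number of citable items published.
    """
    window = (year - 1, year - 2)
    cites = sum(citations_by_pub_year.get(y, 0) for y in window)
    items = sum(articles_by_year.get(y, 0) for y in window)
    return cites / items if items else 0.0

# Hypothetical numbers: in 2011 the journal's 2009-2010 articles drew
# 240 citations, against 100 citable items published in those two years.
citations = {2010: 150, 2009: 90}
articles = {2010: 40, 2009: 60}
print(impact_factor(citations, articles, 2011))  # 240 / 100 = 2.4
```

Note what the ratio ignores: whether a citation praises, refutes, or merely name-checks the paper all count the same, which is exactly the shallowness at issue below.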

IF is also predictive of sample size in gene association studies: the higher the IF of the journal in which a study was published, the lower the sample size. One could interpret this as evidence that high-IF journals are more likely to publish a large effect even when it is backed up by only a small sample, while low-IF journals require a more solid amount of data to support the authors’ claims.

If this is the case, it provokes a few thoughts.

Citations are a bit like pageviews or linkbacks: you may get more with extreme claims and controversy, even if the quality of your work isn’t great. Simply counting citations, without any attention to the context in which work is cited, is a pretty shallow measure of importance. Studies are often cited in order to disagree with or refute their conclusions.

However, if measures like IF continue to be important, especially when it comes to budgets, we can probably expect more universities and researchers to game the system. And if all we’re measuring when it comes to “impact” is how often work gets mentioned, we may be setting ourselves up for academic journals run on Huffington Post-style SEO.

Setting aside the long-term solution of reining in the regulation model and ending measurement mania, what kinds of metrics might be devised that would better measure real intellectual contributions, and that would create incentives to conduct high-quality research?


Today in bad science reporting

Oh, dear. The New York Times is a frequent offender when it comes to misrepresenting or misunderstanding social science. Today it’s John Tierney, looking (he claims) to anthropology for help understanding gifting.

It was a good thought. Making sense of gifts has been an important topic in cultural anthropology since the 19th century, and Tierney goes straight to one of the classic case studies, the potlatch.

Unfortunately, he makes it immediately obvious that he has no idea what anthropological research is even about.

After looking at anthropological research into the potlatch, and talking with a Kwakwaka’wakw Indian chief who carries on this gift-giving ritual in British Columbia, I concluded that lavish presents are essential to social harmony.

Um, no. The notion that potlatch helps maintain “social harmony” is only one among many, and more than a little outdated at that. Societies are messy–harmonious in some ways, discordant in others; contemporary social analysis takes this into account, no longer assuming that the function of cultural practices is to help things run smoothly.

We might study potlatches to better understand Kwakwaka’wakw society. We could use an analysis of potlatching in a broader comparison of the many forms of economic organization around the world, to help show how behavior that appears irrational from a standard economics perspective has its own logic, or to open a discussion of how cultural practices have changed through colonial encounters. Anthropologists have devoted a hefty amount of scholarship to the potlatch, for a whole host of purposes. Tierney clearly hasn’t actually looked at much research on the subject, at least nothing published after the 1960s.

But that’s not all!

But now this idea has been tested not only in the lab but also at Amazon.com, and it looks as if the zealous shoppers have been kidding themselves. Spending extra time and money for the perfect gift may make them feel better, but it’s not doing much for the objects of their efforts, according to one of the experimenters, Francis J. Flynn, an organizational psychologist at Stanford University.

Of all the things one might learn from studying potlatches, perhaps the most important one I would recommend to Tierney is this: the potlatch is not Christmas. It is not wedding gifts. Psychology research conducted on US gift recipients (a study population almost certainly not composed of potlatch-practicing American Indians) has basically nothing to do with anthropological analysis of the potlatch.

So first Tierney misrepresents anthropological research, then he claims this strawman has been disproven by psychology. I’m sure he merely intended to add a clever hook to his article. But it would be awfully nice if journalists actually read the research they claim to have consulted.

Russia and public opinion

I’ve been starting to think about bigger themes related to the Russian parliamentary elections, so hopefully a longer post to follow. Right now I just wanted to suggest that this

Opposition groups in Russia have vowed to continue holding unsanctioned demonstrations to protest the results of the December 4 parliamentary elections despite the detention of some 560 protesters in Moscow and another 250 in St. Petersburg less than 24 hours ago.

may have something to do with the (for me) curious question of why the leadership of a fairly authoritarian state ever bothers to appeal to the public–through rhetoric about caring for families and pensioners, through policies to provide housing subsidies, and so on.

As a political leader who has routinely justified the ruling party’s seizure of power on grounds of stability and order, would you really want your citizens and visiting foreigners seeing this on the central street of a major city (here, St. Petersburg)?

Woman surrounded by demonstrators flips off police

Elections do seem to matter, even when the outcome is essentially preordained. The question is, how do they matter, and why?

ETA: Shortly after I posted this, @KermlinRussia sent this marvelous tweet:

KermlinRussia Пeрзидент Роисси (“Perzident of Roissia”)
Московская мэрия не стала DDos’ить Площадь Революции. (“The Moscow mayor’s office decided not to DDoS Revolution Square.”)