There’s an interesting post from the political blog The Monkey Cage, which describes a recent scandal with an open access academic journal. Here’s the key part of the story:
Using pseudonyms, Philip Davis and Kent Anderson, claiming to be researchers at the Center for Research in Applied Phrenology (in case you don’t recall what phrenology is, it’s the study of personality through analysis of bumps on the head — and check out the initials of the ostensible center), submitted a paper to The Open Information Science Journal. Note that I didn’t say that Davis and Anderson wrote the article. In fact, the article was written by a computer program, which cobbled together words and phrases that Davis and Anderson provided to form complex, and often bizarre, sentences. The results reported in the paper were phonies, too.
What happened? Davis and Anderson were contacted by the publisher and informed that their paper had withstood its peer review process and was therefore eligible for publication, pending receipt of $800 in fees.
This obviously says a few things about peer review, and the open access publishing system. The angle that interests me is what this says about getting ideas to spread. One of the problems that we always have is that people often have difficulty judging quality, so they use proxies. One of the proxies that we use in academia is peer review – if an article has passed peer review, we assume that it is of at least a minimal level of quality.
But really, this is just laziness. We often mistake the proxies we use (peer review as a reflection of quality) for the things they are actually supposed to measure (quality). I think we’d be better off if we got better at judging quality for ourselves. It takes more work, but the payoffs are substantial. And if we’re trying to get our own ideas to spread, we need to be aware that it’s not enough to just come up with something that’s better than what is already out there – we have to get the idea to spread. And this is true for a new academic model, a new product or service, or a new way of doing things. To do that, we need to understand the proxies that people use to substitute for judging quality.
(the image shows ideas spreading on twitter – it comes from this blog…)
I still think that peer review is a reflection of quality if we keep in mind what quality is and what it’s for.
In this case, I don’t think quality is some absolute property of the work; it is a relative term that reflects some attribute of the work AND the panel that judged that attribute. (Hence “of poor quality” ought to apply as well to the work as to the panel that judged it.)
The other thing we need to consider when ascribing “quality” is our purpose for doing so. Is it to distinguish high quality work from lesser work? Or to distinguish passable work from the plain mediocre? Depending on which, our methods will be different.
This reminds me, for example, of my puzzlement over why undergraduate classes in American universities have an average grade of B (75%) while undergraduate classes in Australian universities have an average grade of C (50%).
My hypothesis is that in American universities, it is really important to distinguish those who achieve a passable level from the rest, while in Australian universities it is important to distinguish those who achieve the top level.
I think that journal refereeing is the same. Some journals can argue minutiae very well but cannot distinguish between middle-of-the-road passable work and gibberish, while others are the opposite. That’s why when your submission gets rejected, sometimes it is better to submit it to a higher-ranked (rather than lower-ranked) journal.