Poonam Pandey and peer-review
One dubious but vigorous narrative that has emerged around Poonam Pandey’s “death” and subsequent return to life is that the mainstream media will publish “anything”.
To be sure, there were broadly two kinds of news reports after the post claiming Pandey had died of cervical cancer appeared on her Instagram handle: one said she had died and quoted the Instagram post; the other said her management team had said she had died. That is, the first kind stated her death as fact while the second stated her team’s statement as fact. News reports of the latter variety obviously ‘look’ better now that Pandey and her team have admitted she lied (to raise awareness of cervical cancer). But judging the former news reports harshly isn’t fair.
This incident recalls the role of peer-review in scientific publishing. After scientists write up a manuscript describing an experiment and submit it to a journal for publication, the journal’s editors farm it out to a group of independent experts on the same topic and ask them whether they think the paper is worth publishing. (Pre-publication) peer-review has many flaws, including the fact that peer-reviewers are expected to volunteer their time and expertise and that the process is often slow, inconsistent, biased, and opaque.
But for all these concerns, peer-review isn’t designed to reveal deliberately – and increasingly cleverly – concealed fraud. Granted, the journal could be held responsible for missing plagiarism, and the journal and peer-reviewers both for clearly duplicated images and entirely bullshit papers. However, pinning the blame on peer-review for, say, failing to double-check findings when the infrastructure to do so is hard to come by would be ridiculous.
Peer-review’s primary function, as far as I understand it, is to check whether the data presented in a study support the conclusions drawn from it. It works best with some level of trust. Expecting it to respond perfectly to an activity that deliberately and precisely undermines that trust is unreasonable. A better response (both to more sophisticated tools for attempting fraud and to the need to democratise access to scientific knowledge) would be to overhaul the ‘conventional’ publishing process, such as with transparent peer-review and/or by paying for the requisite expertise and labour.
(I’m an admirer of the radical strategy eLife adopted in October 2022: to review preprint papers and publicise its reviewers’ findings along with the reviewers’ identities and the paper, share recommendations with the authors to improve it, but not accept or reject the paper per se.)
Equally importantly, we shouldn’t consider a published research paper to be the last word but rather a work in progress, with room for revision, correction or even retraction. Doing otherwise – as much as stigmatising retractions made for reasons unrelated to misconduct or fraud – may render peer-review suspect when people find mistakes in a published paper, even when the fault lies elsewhere.
Analogously, journalism is required to be sceptical, adversarial even – but of what? Not every claim is worthy of investigative and/or adversarial journalism. In particular, when a claim that someone has died is publicised and the group of people that manages that individual’s public profile “confirms” the claim, that’s the end of that. This is an important reason why such groups exist, so when they compromise that purpose, blaming journalists is misguided.
And unlike peer-review, the journalistic processes in place (in many but not all newsrooms) to check potentially problematic claims – for example, that “a high-powered committee” is required “for an extensive consideration of the challenges arising from fast population growth” – are perfectly functional, in part because their false-positive rate is lower when journalists don’t also have to investigate “confirmed” claims of a person’s death than it would be if they did.