The accepted route for the dissemination of research findings is through their publication in peer-reviewed research articles. This is an excellent system in many regards: other researchers in the field are given an opportunity to assess whether the methods used were appropriate, the experiments well-controlled, the interpretation of the results consistent with the data, and whether the study really adds to the current understanding.
But for all its advantages, pre-publication peer review does not tell us one simple thing: can the experiments be replicated? All too often, researchers from other labs have difficulty replicating published findings. Usually, this is because of subtle differences in methodology between labs. Less frequently, the published data may have arisen from an honest mistake: a malfunctioning piece of equipment, say, or a forgotten step during the statistical analysis. Sadly, there is a third possibility which, despite its rarity, gets all the attention: the spectre of fraud.
Peer review is not a safeguard against fraud, as exemplified by the appalling case of Diederik Stapel. One's scientific productivity is currently gauged by publication output, and as such, there is enormous pressure to publish work in top-quality journals. One unfortunate consequence is that researchers (and editors) are reluctant to retract published work when errors (innocent or otherwise) come to light. The competitive ethos of 'publish or perish', combined with the absence of adequate checks and balances, has allowed the most corrupt and desperate to commit fraudulent acts.
There are other problems with the current system. Several years can pass between a novel discovery and its eventual publication, and the vast size of the literature defeats attempts to keep abreast of developments in distant fields. Fear of being 'scooped' causes many scientists to become secretive about their findings, impeding progress.
I propose a radical solution: the complete abolition of research papers.
The nature of publication runs counter to the very concept of science as a process rather than a product. An alternative approach might entail the submission of all novel research findings to online databases, with credit given to the original contributors, provided that other institutions are able to replicate the results. It would then be in everyone's interest to provide detailed methodology and to support other researchers in their attempts to replicate the work. Unless the work is shown to be reproducible, it would wither and die on the wiki-vine.
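To make this concrete, here is a minimal sketch of how credit assignment in such a database might work. Everything in it is hypothetical: the `Finding` record, its field names, and the threshold of two independent replications are illustrative assumptions, not a proposal for a real schema. The point is simply that credit attaches to the original contributors only once other institutions confirm the result.

```python
from dataclasses import dataclass, field

# Hypothetical rule: a finding earns credit only after this many
# independent institutions confirm it. The number is illustrative.
REPLICATION_THRESHOLD = 2

@dataclass
class Finding:
    """One submitted finding in the hypothetical open database."""
    finding_id: str
    contributors: list[str]   # original authors, credited on confirmation
    methodology: str          # detailed protocol, to aid replication
    replications: list[str] = field(default_factory=list)  # confirming institutions

    def record_replication(self, institution: str) -> None:
        """Log an independent confirmation from another institution."""
        if institution not in self.replications:
            self.replications.append(institution)

    def is_confirmed(self) -> bool:
        """Credit is assigned only once enough independent labs replicate."""
        return len(self.replications) >= REPLICATION_THRESHOLD

# Example: a finding 'withers on the wiki-vine' until two labs confirm it.
finding = Finding("2024-001", ["Original Lab"], "Full protocol text here")
finding.record_replication("Lab B")
print(finding.is_confirmed())  # False: one confirmation is not enough
finding.record_replication("Lab C")
print(finding.is_confirmed())  # True: credit now goes to the contributors
```

A real confirmation rule would be far more nuanced, but even this toy version captures the intended incentive: detailed methodology and cooperation with replicators become the fastest route to credit.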
Clearly, this does not apply to all fields. Not everyone has access to a particle accelerator of the scale of the Large Hadron Collider, and many techniques are highly specialised. In these cases, the researchers would be expected to demonstrate their findings to colleagues within the field.
In this way, research would become a more collaborative effort, and fledgling lab heads would be better able to compete with their more established peers. Journals would publish periodic review articles based on confirmed findings in the database, written by researchers within each field.
The advantages of such a system are considerable. Data would become an open resource, and authors would be more willing to present their latest findings at conferences. Scientists would be rewarded for their continual research output rather than for punctuated papers, raising morale, and data reliability would be assured. Finally, shorter author lists would make individual contributions more transparent. We must recognize an antiquated system for what it is, and begin to implement the necessary changes.