Quality control
A main purpose of the review process is to ensure the quality of publications. Reviewers must therefore be able to check the quality of submitted papers. Quality control includes questions like the following:
- Is the research question interesting and new?
- Are the conclusions supported by the evidence?
When, in a theoretical paper, authors claim to have proven a new theorem, we obviously demand a proof. We can go through the details of the proof and either find a mistake or be convinced. Proofs are usually included in the paper or in an appendix. In short: as far as theoretical findings are concerned, we expect them to be reproducible.
When, in a literature review, authors put their research into the context of research by other people, we again demand proof, this time in the form of references to the literature. These references allow us to go to the library and check whether the authors' interpretation of the literature is appropriate. In short: we expect reviews of the literature to be reproducible as well.
When, in the empirical part of a paper, authors claim that the data show a certain pattern, readers must be able to check whether the conclusions the authors draw are really warranted. I think it is not sufficient to rely on the claim that the author observed some data, prepared it, applied a method, and then obtained some results, when only these results are available to the reader. The standards for empirical research should be as high as those for theory and for the literature.
- The data observation process must be reproducible. One way to do this in the context of laboratory experiments is to provide the instructions and, for a computerised experiment, the computer program.
- The data preparation process must be reproducible, i.e. it must be clear how the raw data was merged, reshaped, and recoded, and how outliers were treated.
All this can easily be achieved by providing the raw data and the methods used to prepare it. At least for laboratory experiments there is usually no reason not to provide the raw data. I also find it highly desirable to have a well documented procedure for the preparation of the data, ideally a computer program that translates the raw data into the prepared data (a sketch of such a script follows after this list). Such a program makes it straightforward to check whether the raw data really produces the prepared data. Any manual, case-by-case manipulation of the data in spreadsheets and the like is risky, not always reproducible, and should be avoided.
- The application of the method must be reproducible. It must be clear which estimator was actually used, and with exactly which parameters (see the second sketch below).
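To make this concrete, here is a minimal sketch of such a preparation script, written in Python with pandas. All file names, column names, and recoding rules are hypothetical and would have to match the actual experiment; the point is only that every step, including the outlier rule, is written down and can be re-run.

```python
# prepare.py - hypothetical sketch of a documented data preparation step:
# one reproducible run translates the raw data into the prepared data.
import pandas as pd

RAW = "raw_sessions.csv"    # hypothetical export from the lab software
PREPARED = "prepared.csv"   # the data set the analysis actually uses

def prepare(raw_file: str = RAW, prepared_file: str = PREPARED) -> pd.DataFrame:
    df = pd.read_csv(raw_file)

    # Recode: map the labels used by the lab software to the labels
    # used in the paper (assumed coding).
    df["treatment"] = df["treatment"].map({"T1": "baseline", "T2": "feedback"})

    # Outliers: the rule is written down here, not applied by hand
    # (assumed rule: drop contributions outside the feasible range 0..100).
    df = df[df["contribution"].between(0, 100)]

    # Fix a deterministic row order: one row per subject and period.
    df = df.sort_values(["subject", "period"]).reset_index(drop=True)

    df.to_csv(prepared_file, index=False)
    return df

if __name__ == "__main__":
    prepare()
```

Because the script is the documentation, a reviewer can simply re-run it and compare its output with the prepared data, instead of trusting a verbal description of what was done in a spreadsheet.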
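The same logic applies to the estimation step. Below is a minimal sketch using statsmodels, again with hypothetical variable names; what matters is that the estimator and every non-default parameter, here the choice of robust standard errors, appear explicitly in the script rather than only in the text of the paper.

```python
# estimate.py - hypothetical sketch: because the estimation is a script,
# the estimator and all of its parameters are on the record.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("prepared.csv")  # output of prepare.py above

# OLS with heteroscedasticity-robust (HC1) standard errors: the
# covariance estimator is stated explicitly, not left to a default.
model = smf.ols("contribution ~ period + C(treatment)", data=df)
result = model.fit(cov_type="HC1")

print(result.summary())
```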
Further reading:
- Sven Vlaeminck and Felix Podkrajac (2017). Journals in Economic Sciences: Paying Lip Service to Reproducible Research? IASSIST Quarterly, 41(1-4), 16.
- Andrew C. Chang and Phillip Li (2021). Is Economics Research Replicable? Sixty Published Papers From Thirteen Journals Say “Often Not”. Critical Finance Review, 10.
- B. D. McCullough, Kerry Anne McGeary and Teresa D. Harrison (2008). Do Economics Journal Archives Promote Replicable Research? The Canadian Journal of Economics / Revue canadienne d'Economique, 41(4), 1406-1420.
- William G. Dewald, Jerry G. Thursby and Richard G. Anderson (1986). Replication in Empirical Economics: The Journal of Money, Credit and Banking Project. The American Economic Review, 76(4), 587-603.
- Peer Reviewers' Openness Initiative
- The CONSORT Statement
- The EQUATOR Network
- Uri Simonsohn (2013). Just Post It: The Lesson From Two Cases of Fabricated Data Detected by Statistics Alone. Psychological Science, 24(10), 1875-1888.
- Aaron Mobley, Suzanne K. Linder, Russell Braeuer, Lee M. Ellis, Leonard Zwelling (2013). A Survey on Data Reproducibility in Cancer Research Provides Insights into Our Limited Ability to Translate Findings from the Laboratory to the Clinic. PLoS ONE, 8(5): e63221.
- C. Glenn Begley (2013). Reproducibility: Six red flags for suspect work. Nature, 497, 433-434.
- David L. Vaux, Fiona Fidler, Geoff Cumming (2012). Replicates and repeats: what is the difference and is it significant? A brief discussion of statistics and experimental design. EMBO Reports (Science & Society), 13, 291-296.
- Florian Prinz, Thomas Schlange, Khusru Asadullah (2011). Believe it or not: how much can we rely on published data on potential drug targets? Nature Reviews Drug Discovery, 10, 712.
- Jelte M. Wicherts, Denny Borsboom, Judith Kats, Dylan Molenaar (2006). The poor availability of psychological research data for reanalysis. American Psychologist, 61(7), 726-728.
- Chris Drummond (2017). Reproducible research: a minority opinion. Journal of Experimental & Theoretical Artificial Intelligence, 30, 1-11.