Discussion Paper
No. 2017-72 | September 28, 2017
Randall J. Hannum
A replication plan for “Does social media reduce corruption?” (Information Economics and Policy, 2017)
(Published in The practice of replication)

Abstract

The importance of replicating economic research to improve the validity of findings has been the subject of ongoing discussion, but there is no consensus on what replication means in practice. This article discusses a rationale for replicating a study and offers a plan for how one might approach a replication of an actual study.

JEL Classification:

B40, D73

Cite As

[Please cite the corresponding journal article] Randall J. Hannum (2017). A replication plan for “Does social media reduce corruption?” (Information Economics and Policy, 2017). Economics Discussion Papers, No 2017-72, Kiel Institute for the World Economy. http://www.economics-ejournal.org/economics/discussionpapers/2017-72


Comments and Questions



Benjamin Wood - Referee comments
October 23, 2017 - 22:03
This paper outlines how to approach replication research from a theoretical perspective and then applies that theory to an empirical case. While the author provides some general details for his approach, I think additional specifics would greatly strengthen the contribution of the paper to the transparency literature. After spending 5+ years running 3ie’s replication programme, I found myself referencing a few of our studies on this topic. I limited myself to 3 personal references. Please feel free to cite other references if you find those more useful.

Comments:
Page 2: When describing “what is true” I struggle with the concept of one absolute truth. Isn’t there space for grey?
Page 2: I would argue that in almost all situations no researcher is infallible. I would suggest there is value in replication research that does not necessarily determine the “truth” but provides an alternative researcher’s perspective on the data and analysis.
Pages 2-3: I would like to see a deeper discussion of how the author defines reliable results, i.e. the constructive criticism briefly described in the current draft.
Page 3: How does a replication researcher (or a consumer of replication research) define “duplicate” results? Do the results need to match exactly, down to each decimal point? What if software or user-written code differences (for example) result in “minor” differences? Would something like the push-button replication protocol that we’re currently working on be helpful in this situation? http://www.3ieimpact.org/media/filer_public/2016/07/13/replication-protocol-pbr.pdf
Page 3: Where the author mentions reaching out to the original authors, is there a suggested framework for such a conversation? What incentive do the original authors have to participate in this process?
Page 3: I would caution against thinking reliable results need to be exactly identical. Please provide additional details here.
Page 3: I would also like to think that positive replication findings can independently verify the original findings and give policymakers more confidence in the original results. I’m not seeing space for this type of situation in the author’s description of quantifying replication findings.
Page 3: The author’s description of the different steps he proposes for a replication study reminds me of our “Quality Evidence for Policymaking: I’ll Believe It When I See the Replication” paper. I agree that there should be a clear delineation between reproducing the original results (both using the original data/code and trying to recode the methodology using the original data and the publication) and robustness checks or additional analysis. I would ask the author to describe these processes in a bit more detail. With all things replication related, the devil really is in the details.
Page 3: I found the researcher’s motivation for the study selection really lacking. The researcher could have picked a number of different papers to develop a replication plan; why did he choose this one?
Page 4: I didn’t understand the “except for religion variables” mentioned around the Religion Data Archive. Aren’t all of the variables being used religion-related variables?
Page 4: I found the conversation around the “correct (negative) sign” counterproductive to replication research. I would argue the whole idea of replication research is to attempt to verify the original results, but I wouldn’t call those original results necessarily correct.
Page 4: I didn’t understand the end of the second full paragraph, where the researcher discusses “methods used that differ from the choices made in papers they reference.” Aren’t we most interested in differences made by the replication researchers that differ from the original methods/analysis?
Page 4: Before assessing arguments, I would claim that the starting point is testing whether an independent researcher can run the code on the data and generally reproduce the original results. If not, then discussing the paper’s original argument seems unnecessary to me.
Page 4: How would the author suggest replication researchers go about checking assumptions? Which assumptions should be checked in this proposed replication paper?
Page 4: While I agree that it is difficult to recode an original publication from scratch, and I can attest to that from my past research, I thought the paper dismissed this concept a bit too quickly. I believe it is the job of the original authors to describe transformations and corrections in their publication, working papers, or supporting materials. While it might be very difficult to obtain identical results, I would hope a reasonable researcher could generally reproduce very similar findings.
Page 5: I would like to see some organization to the questions at the top of the page. Our “Which Tests Not Witch Hunts: A Diagnostic Approach for Conducting Replication Research” paper in this issue provides a systematic approach for these types of replication questions. You might find this paper, or others that have attempted to categorize approaches to replication research, helpful.
Page 5: I didn’t understand the danger being described in the first full paragraph on this page.
Page 5: I kept expecting to see policymaking recommendations highlighted somewhere on this page, but they never appeared. Is there a reason why?
Page 5: I would think any changes to the methods should also ideally be pre-specified, no?
Page 5: Who judges “failure” to replicate or “assembled incorrectly”? How should we think about quantifying these types of comments? I’m assuming original authors and replication researchers would have very different thoughts in this regard.
Page 5: I found the concept of “render[ing] results meaningless” difficult to fathom, and similarly the idea of “results that do not undermine the study.” This language needs to be clarified.
Page 5: I believe there is more grey space in replication than “exactly reproduces” and “failed to replicate.” Is there a reason the author has taken such a straight-line approach?
Page 6: I would like to see a more detailed conclusion. I find value in verifying original results and think more confirmatory replications would actually help change the dynamics around replication research. When discussing replication studies “casting doubt” on the original results, I would like to see answers to the “where,” “how,” and “who judges” types of questions. What if the original authors disagree about whether the replication study casts doubt on the original study? I can provide many examples from 3ie’s replication paper series if the author would like to explore this issue at greater depth.

Randall Hannum - Response to Referee
November 01, 2017 - 17:05
Response to referee

Benjamin Wood - Worms
November 02, 2017 - 18:20
Hi Randall, Thank you for considering my comments. I hope I didn't bring too much of my 3ie perspective to your work. I agree that researchers approach replication studies from different perspectives. Just to quickly reply to your last point: "Worm Wars" was by far the most contentious 3ie-funded replication study to date. I don't know how to quantify the policy impact of that replication research, but it certainly started a number of conversations. With regards, Ben

Randall Hannum - Response
November 22, 2017 - 03:36
Ben, Thank you for mentioning "Worm Wars." I have started looking at the original paper and the replication, which is adding more depth to my understanding of some of the finer points of replication. Thank you again. All the best, Randy

Anonymous - Referee report 2
November 15, 2017 - 08:28
see attached file

Randall Hannum - Response to referee
November 22, 2017 - 03:26
Response to referee

Brian D. Haig, University of Canterbury, New Zealand - Referee report 3
December 04, 2017 - 09:26
see attached file

Randall Hannum - Response to referee
December 10, 2017 - 15:35
Response to referee

W. Robert Reed - Decision letter
January 20, 2018 - 19:57
see attached file