The authors investigate the influence of case selection and (re)coding across two vintages of a key resource for research on economic sanctions: the Peterson Institute database reported in Hufbauer et al. (Economic Sanctions Reconsidered, 2nd edition 1990 and 3rd edition 2007). The Peterson Institute has not reported transparently on these changes. At the level of individual case studies, the authors uncover a tendency to inflate success scores, with failures reclassified as successes even when the evidence for doing so was unconvincing. At the level of the aggregated case studies and the general methodology, they uncover positive bias, that is, methodological changes that make it more likely to find sanction success, as indicated by a higher success score either on average or in individual cases: the splitting of episodes into separate cases and the changed definition of sanction contribution raise the success ratio in general and, ultimately, the share of sanctions judged to be a success. The authors also show the importance of the reclassification of ‘destabilization’ cases as ‘regime change’ cases. Their probit analysis shows that the 3rd edition’s methodology underestimates the contribution of certain sanction characteristics, including the positive impact of the costs of sanctions to the sender, the duration of the sanctions, and the sender’s companion policies.
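The episode-splitting mechanism can be illustrated with a minimal sketch. The scores below are hypothetical, not figures from the study; the sketch assumes the Hufbauer et al. convention of a 1–16 success score (policy result times sanction contribution) with scores of 9 or higher counted as successes:

```python
def success_ratio(scores, threshold=9):
    """Share of cases counted as successes under a score threshold.

    Hufbauer et al. score each case on a 1-16 scale (policy result x
    sanction contribution); by convention, scores of 9+ count as success.
    """
    successes = sum(1 for s in scores if s >= threshold)
    return successes / len(scores)

# One episode coded as a single (failing) case, 2nd-edition style.
unsplit = [6]            # hypothetical score
# The same episode split into two cases, 3rd-edition style,
# one of which now clears the success threshold.
split = [6, 9]           # hypothetical scores

print(success_ratio(unsplit))  # 0.0
print(success_ratio(split))    # 0.5
```

Splitting leaves the underlying episode unchanged but raises the aggregate success ratio whenever at least one of the new cases is scored a success, which is the positive bias the authors describe.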