Discussion Paper
No. 2013-42 | August 21, 2013
Chris R. Geller, Jamie Mustard and Ranya Shahwan
Focused Power: Experiments, the Shapley-Shubik Power Index, and Focal Points

Abstract

Experiments evaluate the fit of human behaviour to the Shapley-Shubik power index (SSPI), a formula of voter power. Groups of six subjects with differing votes divide a fixed purse by majority rule in online chat rooms. Earnings proxy for measured power. Chat rooms and processes for selecting subjects reduce or eliminate extraneous forces. Logrolling remains as the primary political force. Subjects’ initial proposals for division of the purse allow measurement of effects from focal points and transaction costs. Divisions of purses, net of those effects, closely fit the SSPI, averaging 1.033 of their SSPI values. The SSPI can serve as a control for power embedded in voting blocs, permitting fuller analysis of other factors that affect political outcomes.

Data Set

JEL Classification:

D71, D72, D74

Cite As

Chris R. Geller, Jamie Mustard, and Ranya Shahwan (2013). Focused Power: Experiments, the Shapley-Shubik Power Index, and Focal Points. Economics Discussion Papers, No. 2013-42, Kiel Institute for the World Economy. http://www.economics-ejournal.org/economics/discussionpapers/2013-42


Comments and Questions



Anonymous - Invited Reader Report
September 13, 2013 - 08:39
Broadly speaking, two different venues of voting power studies exist. One focuses on the mathematical properties and other qualities of the so-called power indices, which are the tools of the trade; this venue is interested in the indices as mathematical constructions and studies their various properties. The other venue is empirically oriented and applies one or more power indices to assess a specific research question, such as what would be a justified vote distribution in the EU Council. The current manuscript belongs to the latter venue, this time via empirical cake-division experimenting.

The article applies the Shapley-Shubik power index, which is basically the Shapley value used in the context of simple games. The authors create a test setting by constructing a weighted voting game and then asking the participants to divide a fixed purse. This is carried out in a chat-room environment in a test lab. The participants have to come up with a plan to divide the cake under a simple majority requirement. The trick is that the game is weighted, i.e. the participants have varying numbers of votes. For this, the authors have designed certain vote profiles with focal points. The experiments (or games, or cake divisions) are repeated a number of times. The results show that the fixed purse seems to be divided very closely according to the way the Shapley-Shubik index would suggest. This is a very important and interesting result.

This is a very interesting paper. In my opinion, the motivation of the paper is well above average for voting power studies. There is a clear research question and setting, the results are interesting, and it is a well-written paper. Basically, just one possible issue comes to mind: it is well known that in many cases the power indices produce very similar results, and in some cases the results are equal. How about now?
I wonder how the results of the present paper would look if the Banzhaf-Penrose or some other power index were applied instead of the Shapley-Shubik. This investigation might be worth a footnote or so. The Shapley value is behind the Shapley-Shubik index; however, this does not apply to other power indices. If, for example, the fit of the Banzhaf index were also very good, what would that tell us?

Chris Geller - Reply to Invited Reader Report Sept 13, 2013
September 19, 2013 - 13:59 | Author's CV, Homepage
Thank you very much for your positive and thoughtful review. If you and other readers find this response satisfactory, we will be happy to add it into the paper’s conclusion or as a footnote.

This paper focuses on the Shapley value normalized as the SSPI. We do not attempt to evaluate our empirical results relative to other power indices, for three central reasons. First, addressing an axiomatically derived power index allowed us to design an experiment tailored to those axioms. Second, the Shapley value has particularly widespread applications, making it a good beginning. Third, we felt that addressing this one index involved enough complications in design and analysis that we should keep comparative analysis separate.

That said, our search of vote profiles did not reveal profiles with 1) measurable differences contrasting the SSPI with the Banzhaf (1965)-Penrose (1946) power index (“BPPI”) and 2) unambiguous compliance with the Shapley axioms. Only two of the power-identical profile sets in this paper yield any difference between the SSPI and the BPPI. For the r set, the largest player has 36.67% of the power under the SSPI, contrasting with 36.21% under the BPPI. For w1, the SSPI yields 40.00% and the BPPI yields 39.29%. Readers who wish to apply these results to the BPPI, and who accept this experiment’s design as appropriate for the BPPI, may interpret these results as supporting the BPPI equally with the SSPI. Those who accept this experiment as relevant to other indices will find our results contrasting strongly with indices that assign substantially more power to large players, e.g. Johnston (1978), or substantially less power to large players, e.g. Deegan and Packel (1978).

Thank you, Chris Geller
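[Editorial note: for readers who want to check such index comparisons themselves, both indices can be computed by brute force for the small games used here. The sketch below is not from the paper and does not use the paper's r or w1 profiles (whose weights are not given in this thread); the five-player weight vector is a hypothetical illustration in which the two indices differ, as they do for the profiles the authors describe.]

```python
from itertools import permutations, combinations
from fractions import Fraction

def sspi(weights, quota):
    """Shapley-Shubik power index: for each ordering of the players,
    credit the pivotal player (the one whose votes first reach the quota),
    then divide each player's pivot count by the number of orderings."""
    n = len(weights)
    pivots = [0] * n
    for order in permutations(range(n)):
        total = 0
        for player in order:
            total += weights[player]
            if total >= quota:
                pivots[player] += 1
                break
    total_pivots = sum(pivots)  # equals n!
    return [Fraction(p, total_pivots) for p in pivots]

def banzhaf(weights, quota):
    """Normalized Banzhaf-Penrose index: count, for each player, the
    coalitions of other players that lose without them but win with them."""
    n = len(weights)
    swings = [0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):
            for coalition in combinations(others, r):
                s = sum(weights[j] for j in coalition)
                if s < quota <= s + weights[i]:
                    swings[i] += 1
    total_swings = sum(swings)
    return [Fraction(s, total_swings) for s in swings]

# Hypothetical profile (not from the paper): one 3-vote player and four
# 1-vote players, quota 4 (simple majority of the 7 votes).
weights, quota = [3, 1, 1, 1, 1], 4
print(sspi(weights, quota))     # large player gets 3/5 of the power
print(banzhaf(weights, quota))  # large player gets 7/11, slightly more
```

This illustrates the pattern the reply describes: the two indices agree for most small majority games and differ only modestly (here 60.0% versus about 63.6% for the large player) when they differ at all.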

Anonymous - Invited reader's report
September 23, 2013 - 17:10
This is a well-written and very interesting paper, with a thoughtfully designed and novel experiment. I do have some reservations about the authors’ conclusions on the empirical relevance of the Shapley-Shubik power index (SSPI).

In particular, before being compared to the SSPI, observed divisions are re-scaled according to how often the player is included in the initial proposals. After this re-scaling, the divisions closely fit the SSPI, but should this re-scaling be performed at all? How closely do the “raw” data (i.e., average observed final divisions) approximate the SSPI? From Table 4 at the end of the paper it seems that some of these observations are quite far from the SSPI. Also, observed divisions are averaged over several sets of “vote profiles” that should be equivalent according to the SSPI. In Table 4 we can see that average earnings can differ quite a lot across theoretically equivalent vote profiles. If the average divisions are very different for different vote profiles, is the SSPI really that relevant? After all, the SSPI predicts no effect of changing the “nominal” votes. In this context, I miss an analysis of how well the nominal votes explain the data compared to the SSPI, and perhaps a mention of the previous experimental literature that looks at the nominal votes, in particular Fréchette et al. (2005), “Gamson’s Law versus Non-Cooperative Bargaining Theory,” Games and Economic Behavior, and Diermeier and Morton (2005), “Proportionality versus Perfectness: Experiments in Majoritarian Bargaining,” in Social Choice and Strategic Decisions: Essays in the Honor of Jeffrey S. Banks, edited by David Austen-Smith and John Duggan. There are also some other experimental papers motivated by power indices in which divisions of purses are compared to the SSPI: Montero et al. (2008), “Enlargement and the Balance of Power: An Experimental Study,” Social Choice and Welfare, and Esposito et al. (2010), “An Experimental Study on Learning About Voting Powers” (working paper, available at http://halshs.archives-ouvertes.fr/docs/00/50/18/40/PDF/DT2010-18.pdf).

The experiment is novel in that the experimental design explicitly tries to create favorable conditions for Shapley’s axioms, and this is thoughtfully done. Efficiency is favored by excluding competitive subjects (who potentially would be willing to waste money) and by holding an auction in which subjects wrote a bid of what they would accept not to participate in the experiment (to select the subjects most motivated to earn money). The price of this is that excluding competitive subjects may affect the external validity of the experiment, and holding an auction may itself create focal points or aspiration levels. Did the authors observe an effect of the bids on later behavior?

I found some of the paper’s statements on page 7 very confusing. In particular, veto power does not in itself violate Shapley’s axiom of efficiency! The SSPI can be calculated for any game regardless of veto power. Indeed, a lot of Shapley’s analysis is based on so-called “unanimity games” in which all players in a set S have veto power. Though I feel less strongly about this than about the previous comment, I also suspect that the violation of additivity described on p. 7 is a consequence of “buying a ticket from a raffle” being very different from “playing a characteristic function game”, and as such outside the scope of power indices. Finally, a minor comment: Table 1 has two horizontal lines missing.

Chris Geller - Reader's report Sep 23
October 03, 2013 - 15:22 | Author's CV, Homepage
Reply to Anonymous - Invited reader's report, September 23, 2013.

We thank the Invited Reader for considering our paper so carefully. We share the Invited Reader’s concerns, many of which underlie the motivations for our research, and appreciate the opportunity to improve our paper. We particularly thank the Invited Reader for identifying errors of content and noting shortcomings in our literature review, especially Montero, Sefton and Zhang (2008). We will update the paper with the following corrections, additions, and some of these clarifications if the Invited Reader and others find them acceptable.

A number of experiments in bargaining theory incorporate elements of various models in the structure of their experiments (e.g. Fréchette, Kagel and Morelli, “Nominal Bargaining Power…”, JPE 2005; Fréchette, Kagel and Morelli, “Gamson’s Law…”, GEB 2005; Diermeier and Morton 2005). Diermeier and Morton (2005) perform a computerized experiment to evaluate the goodness of fit between a game-theoretic idea and experimental outcomes, comparing empirical results to multiple theoretical constructions. Some of these investigations of bargaining theory create experimental environments tailored to particular aspects of bargaining models. Baron-Ferejohn models represent bargaining as a series of proposals from players selected at random, then approved or rejected by players with a majority of votes; these sequences are directly implemented in bargaining experiments. Gamson’s Law predicts which coalitions will form and how goods are divided among the parties within a coalition.

The focus of our research differs from those bargaining theory experiments in that the latter test how well various theories or models predict human behavior in reasonably natural environments. These bargaining theories make contentions about observable results, including the formation of coalitions and divisions of gains within coalitions.
Fréchette, Kagel and Morelli (“Gamson’s Law versus Non-Cooperative Bargaining Theory”, Games and Economic Behavior 2005) test Gamson’s Law in the institutional framework of demand bargaining, testing the implication that Gamson’s Law holds in all institutional settings. In contrast, our experiments evaluate how closely human divisions of gains may approach SSPI results, given conditions approximating the axioms of the Shapley value. Our research endeavors to evaluate the applicability of the SSPI to account for the power embedded in blocks of votes, in order to isolate the effects of the votes themselves from other aspects of human interaction. The bargaining theory experiments endeavor to evaluate how well such theories account for human behavior, including those other aspects of interaction. Bargaining theory does not assume symmetry. The SSPI and the Shapley value are not bargaining theory in the sense of Gamson’s Law, demand bargaining, or the Baron-Ferejohn model; they only address whether or not power will remain constant with changes in nominal vote shares when symmetry is maintained.

I find Esposito et al. (2010) interesting, especially in light of Diermeier and Morton (2005) and personal experience with experiment-based teaching. However, the authors noted their paper as “Very Preliminary”, so I feel that public comments are inappropriate.

The SSPI, as a normalization of the Shapley value, gives the value of participating in a game under certain conditions. Those conditions do not include the presence of focal points. Focal points are products of perceptions of vote blocks rather than of the size of vote blocks per se. If the SSPI coincided with voting outcomes in the presence of focal points, the meaning of that coincidence would be unclear. For example, it might indicate that the Shapley value is more general than its axioms imply; or perhaps it would indicate that focal points do not affect outcomes.
Providing an example of raw results is appropriate, at least in these discussions. Since the p profiles have the most observations, we computed the mean earnings of the largest player across the p profiles: $4.92, 98.4% of the SSPI, across all rounds; and $4.98, 99.7% of the SSPI, for the rounds with all players experienced. The SSPI predicts that changing the “nominal” vote profile will have no effect on power only if the change does not violate symmetry by affecting matters that can influence voting outcomes beyond the power in the vote blocks themselves. Perceptions of vote blocks affect voters beyond the power of the votes per se and so violate symmetry. Focal points are among the ways that such perceptions can affect outcomes.

The Invited Reviewer is correct that excluding competitive subjects may affect the external validity of the experiment; “external” in this case refers to more natural environments. We would be interested in the effects of competitive subjects on vote outcomes, as we are interested in the effects of many natural variables on vote outcomes. That interest is why we involved ourselves in the matter of power embedded in vote blocks per se. With confidence in the SSPI, we can introduce heterogeneous subjects (such as those with differing psycho-social orientations) to investigate the effects of the conditions that vary, and perhaps of heterogeneity itself. Our data are not suited to investigating the effects of the offer-bids we gathered to select subjects for withdrawal from the experiment (used when too many students arrived at a particular session); we excluded much of the relevant variation in order to achieve effectively homogeneous subjects. Perhaps such differences would be relevant to the objectives and structure of further experiments.

The Invited Reviewer is also correct that veto power does not in itself violate Shapley’s axiom of efficiency. Veto power that can, and sometimes does, result in lower total payoffs to all players does violate efficiency.
Thank you; the correction improves the transparency of the example. I will correct the lines in Table 1. Thank you very much, Chris

Anonymous - Referee Report 1
September 30, 2013 - 08:58
see attached file

Chris Geller - Reply to Referee 1
October 09, 2013 - 09:10
We thank Referee 1 for the time and effort spent in reading and reviewing our paper. Although our paper has been rejected, we feel that replies may be of use to some readers. Cheers, Chris Geller, Oct 9, 2013

Anonymous - Referee Report 2
September 30, 2013 - 08:59
see attached file

Chris Geller - Reply to Referee 2
October 09, 2013 - 09:09
We thank Referee 2 for the time, care, and effort spent in reading and reviewing our paper. Although our paper has been rejected, we feel that replies may be of use to some readers. Cheers, Chris Geller, Oct 9, 2013

Jang Woo Park - Invited Reader Comment
October 02, 2013 - 08:35
see attached file

Chris Geller - Replies to Readers and Reviewers
October 04, 2013 - 19:53
Some issues of theory apply directly or indirectly to observations from more than one reviewer or reader, and some apply to multiple aspects of our methods. We would like to discuss issues of efficiency, symmetry, and additivity in comment threads separate from the replies to individual reviewers and readers. If we have not responded to such issues in a reply to a reader or reviewer, we are not neglecting them, but rather paying them more attention. Thank you, Chris

Chris Geller - Preceding note outdated
October 09, 2013 - 09:14
There is no longer any need to clarify theoretical issues. Cheers, Chris

Anonymous - Co-editor's decision
October 07, 2013 - 11:46
After reading the paper and all the posted comments, I agree with the two referees and Dr. Park that the paper, even though well motivated, fails to be convincing enough and cannot be accepted at this stage. The two referees focus mostly on theoretical inconsistencies, with which I agree, but the most important problem, highlighted both by the second referee and by Dr. Park, is the exclusion methodology for the experiment. All the reports contain useful suggestions, literature comparisons, and sources, so I think that the theoretical and presentation parts could be fixed in a revision, but the problems with the experimental methodology ultimately make me prefer rejection.