TY - GEN
T1 - Rationality authority for provable rational behavior
AU - Dolev, Shlomi
AU - Panagopoulou, Panagiota N.
AU - Rabie, Mikaël
AU - Schiller, Elad M.
AU - Spirakis, Paul G.
PY - 2011/7/7
Y1 - 2011/7/7
N2 - Players in a game are assumed to be totally rational and absolutely smart. However, in reality all players may act in non-rational ways and may fail to understand and find their best actions. In particular, participants in social interactions, such as lotteries and auctions, cannot be expected to always find by themselves the "best reply" to any situation. Indeed, agents may consult with others about the possible outcomes of their actions. It is then up to the counselee to ensure the rationality of the consultant's advice. We present a distributed computer system infrastructure, named rationality authority, that allows safe consultation among (possibly biased) parties. The parties' advice is adopted only after its feasibility and optimality have been verified by standard formal proof checkers. The rationality authority design considers computational constraints, as well as privacy and security issues, such as verification methods that do not reveal private preferences; some of the techniques resemble zero-knowledge proofs. A non-cooperative game is presented by the game inventor along with its (possibly intractable) equilibrium. The game inventor advises playing by this equilibrium and offers a checkable proof of the equilibrium's feasibility and optimality. Standard verification procedures, provided by verifiers trusted according to their reputation, are used to check the proof. Thus, the proposed rationality authority infrastructure facilitates the application of game theory to several important real-life scenarios through the use of computing systems.
KW - game authority
KW - game theory
KW - privacy
KW - rationality authority
UR - http://www.scopus.com/inward/record.url?scp=79959905586&partnerID=8YFLogxK
U2 - 10.1145/1993806.1993858
DO - 10.1145/1993806.1993858
M3 - Conference contribution
AN - SCOPUS:79959905586
SN - 9781450307192
T3 - Proceedings of the Annual ACM Symposium on Principles of Distributed Computing
SP - 289
EP - 290
BT - PODC'11 - Proceedings of the 2011 ACM Symposium on Principles of Distributed Computing
T2 - 30th Annual ACM SIGACT-SIGOPS Symposium on Principles of Distributed Computing, PODC'11, Held as Part of the 5th Federated Computing Research Conference, FCRC
Y2 - 6 June 2011 through 8 June 2011
ER -