Abstract


In this paper we address the question of learning in a two-sided matching mechanism based on the deferred acceptance algorithm. We consider a repeated matching game in which, at each period, agents observe their match and may revise their strategy (i.e., the preference list they submit to the mechanism). We focus on better-reply dynamics. To this end, we first characterize better-replies and give a comprehensive description of the dominance relation between strategies. Better-replies turn out to have a simple structure and can be decomposed into four types of changes. We then present simple better-reply dynamics with myopic, boundedly rational agents and identify conditions under which limit outcomes are outcome-equivalent to the outcome obtained when agents play their dominant strategies. Better-reply dynamics may fail to converge, but when they do, the limit strategy profiles form a subset of the Nash equilibria of the stage game.
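For readers unfamiliar with the mechanism underlying the stage game, the following is a minimal sketch of the (proposer-proposing) deferred acceptance algorithm of Gale and Shapley for one-to-one matching. The function name, the dict-of-lists input format, and the example preference lists are illustrative choices, not taken from the paper; in the paper's setting these submitted preference lists are themselves the agents' strategies, which better-reply dynamics revise between periods.

```python
def deferred_acceptance(proposer_prefs, receiver_prefs):
    """Proposer-proposing deferred acceptance (Gale-Shapley).

    proposer_prefs: dict mapping each proposer to an ordered list of receivers
                    (most preferred first).
    receiver_prefs: dict mapping each receiver to an ordered list of proposers.
    Returns a dict {proposer: receiver}: the proposer-optimal stable matching.
    """
    # rank[r][p] = position of proposer p in receiver r's preference list
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in receiver_prefs.items()}
    next_choice = {p: 0 for p in proposer_prefs}  # next receiver p will propose to
    engaged = {}                                  # receiver -> tentatively held proposer
    free = list(proposer_prefs)                   # proposers without a tentative match

    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]     # best receiver not yet proposed to
        next_choice[p] += 1
        if r not in engaged:
            engaged[r] = p                        # receiver tentatively accepts
        elif rank[r][p] < rank[r][engaged[r]]:
            free.append(engaged[r])               # receiver trades up; old partner freed
            engaged[r] = p
        else:
            free.append(p)                        # proposal rejected; p stays free

    return {p: r for r, p in engaged.items()}
```

For example, with three proposers and three receivers:

```python
proposer_prefs = {'a': ['x', 'y', 'z'], 'b': ['y', 'x', 'z'], 'c': ['x', 'y', 'z']}
receiver_prefs = {'x': ['b', 'a', 'c'], 'y': ['a', 'b', 'c'], 'z': ['a', 'b', 'c']}
deferred_acceptance(proposer_prefs, receiver_prefs)  # -> {'a': 'x', 'b': 'y', 'c': 'z'}
```

Submitting truthful lists is a dominant strategy for proposers under this mechanism, but not for receivers, which is what makes the learning question studied in the paper non-trivial.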