Learning from the experience of others - Simultaneous search and coordination in R&D and diffusion processes.
Nield D. Grosse and Oliver Kirchkamp

In this paper we study a multi-player two-armed bandit model with two risky arms in discrete time. Players have to find the superior arm and can learn from others' histories of choices and successes. In equilibrium, there is no conflict between individual and social rationality. If agents depart from perfect rationality and use count heuristics, they can benefit from coordination (or centralization) of search activities. We test the conjecture that agents gain from coordination with a between-subject design in two treatments. In the experiments we find no gains from coordination. Instead, we find less severe deviations from the equilibrium strategy in the non-coordinated treatment.
Keywords: Two-armed bandit, parallel search, coordination, experiment.
Most recent version: February 2009.