Traffic Congestion Management as a Learning Agent Coordination Problem

Kagan Tumer, Zachary T. Welch, Adrian Agogino
Copyright: © 2009 | Pages: 19
ISBN13: 9781605662268 | ISBN10: 1605662267 | ISBN13 Softcover: 9781616924720 | EISBN13: 9781605662275
DOI: 10.4018/978-1-60566-226-8.ch012
Cite Chapter

MLA

Tumer, Kagan, et al. "Traffic Congestion Management as a Learning Agent Coordination Problem." Multi-Agent Systems for Traffic and Transportation Engineering, edited by Ana Bazzan and Franziska Klügl, IGI Global, 2009, pp. 261-279. https://doi.org/10.4018/978-1-60566-226-8.ch012

APA

Tumer, K., Welch, Z. T., & Agogino, A. (2009). Traffic Congestion Management as a Learning Agent Coordination Problem. In A. Bazzan & F. Klügl (Eds.), Multi-Agent Systems for Traffic and Transportation Engineering (pp. 261-279). IGI Global. https://doi.org/10.4018/978-1-60566-226-8.ch012

Chicago

Tumer, Kagan, Zachary T. Welch, and Adrian Agogino. "Traffic Congestion Management as a Learning Agent Coordination Problem." In Multi-Agent Systems for Traffic and Transportation Engineering, edited by Ana Bazzan and Franziska Klügl, 261-279. Hershey, PA: IGI Global, 2009. https://doi.org/10.4018/978-1-60566-226-8.ch012


Abstract

Traffic management problems provide a unique environment to study how multi-agent systems promote desired system-level behavior. In particular, they represent a special class of problems where the individual actions of the agents are neither intrinsically “good” nor “bad” for the system. Instead, it is the combinations of actions among agents that lead to desirable or undesirable outcomes. As a consequence, agents need to learn how to coordinate their actions with those of other agents, rather than learn a particular set of “good” actions. In this chapter, the authors focus on problems where there is no communication among the drivers, which puts the burden of coordination on the principled selection of the agent reward functions. They explore the impact of agent reward functions on two types of traffic problems. In the first problem, the authors study how agents learn the best departure times in a daily commuting environment and how following those departure times alleviates congestion. In the second problem, the authors study how agents learn to select desirable lanes to improve traffic flow and minimize delays for all drivers. In both cases, they focus on having an agent select the most suitable action for each driver using reinforcement learning, and explore the impact of different reward functions on system behavior. Their results show that agent rewards that are both aligned with, and sensitive to, the system reward lead to significantly better results than purely local or global agent rewards. They conclude the chapter by discussing how changing the way in which system performance is measured affects the relative performance of these reward functions, and how agent rewards derived for one setting (timely arrivals) can be modified to meet a new system setting (maximizing throughput).
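The abstract contrasts purely local and purely global agent rewards with rewards that are both aligned with, and sensitive to, the system reward. One well-known construction with exactly these two properties (used in other work by these authors, though the abstract does not name the specific reward used in this chapter) is the difference reward, D_i = G(z) − G(z − i): the global reward minus the global reward with agent i's action removed. The toy congestion model below (slot capacities, the quadratic delay cost, and all function names are illustrative assumptions, not taken from the chapter) sketches the idea for a departure-time game:

```python
# Hypothetical toy model, not from the chapter: n drivers each pick one of
# k departure slots; delay in a slot grows with its load above capacity.

def global_reward(loads, capacity=5):
    # System reward G: negative total delay, with delay per slot quadratic
    # in the load above capacity (an illustrative choice).
    return -sum(max(0, load - capacity) ** 2 for load in loads)

def difference_reward(slots, agent, k, capacity=5):
    # D_i = G(z) - G(z - i): G evaluated with and without agent i's action.
    # D_i is aligned with G (raising D_i never lowers G) and far more
    # sensitive to agent i's own choice than G itself is.
    loads = [0] * k
    for s in slots:
        loads[s] += 1
    g = global_reward(loads, capacity)
    loads[slots[agent]] -= 1  # counterfactual: remove agent i's departure
    g_without = global_reward(loads, capacity)
    return g - g_without

# Twelve drivers all leaving in slot 0 of three: each driver's presence in
# the congested slot yields a sharply negative difference reward, while a
# driver who moves to an empty slot receives zero penalty.
print(difference_reward([0] * 12, agent=0, k=3))        # congested slot
print(difference_reward([0] * 11 + [1], agent=11, k=3)) # uncongested slot
```

Note how the second driver's reward is unaffected by the congestion the other eleven cause: that is the "sensitivity" property, which keeps each agent's learning signal from being drowned out by the rest of the system.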
