
Reinforcement Mobility Planning

The term Reinforcement Mobility Planning (RMP) has broad and narrow meanings. Broadly speaking, RMP refers to the study of identifying the impacts of historical planning practices and learning from them to make better mobility planning decisions in the future. Narrowly speaking, RMP refers to the use of artificial intelligence (especially reinforcement learning) to optimize mobility plans.

The harmonious integration of existing and emerging mobility services, technologies, and infrastructures has the potential to resolve some of the most critical issues facing society today. However, if not done properly, integration can sustain or even worsen existing problems related to accessibility, equity, public health, the environment, and disaster resilience. A key problem from the perspective of transportation planning is how to optimally allocate limited resources across different aspects of the mobility system in the context of overall urban dynamics. Since urban systems are highly complex and the impacts of policy decisions can be difficult and costly to predict, smarter planning methods are needed to help stakeholders better understand the potential outcomes of their decisions.

I approach the integration challenge in urban transportation planning through two mutually reinforcing tracks. The first track develops and applies urban mobility models that account for the ongoing development of automation technologies, electrification, and coordinated mobility services, enabling more collaborative multi-stakeholder decision-making. The second track focuses on developing and applying AI algorithms (especially reinforcement learning, RL) to the models developed in the first track for making better investment and policy decisions under uncertainty. Together, the two tracks provide a holistic ecosystem for a sustainable and smarter planning approach that is theoretically sound and practically feasible.
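To make the second track concrete, the sketch below shows how a sequential investment problem can be framed for RL in the narrow sense of RMP. It is a minimal, hypothetical illustration, not a model from my research: the two investment options, the diminishing-returns payoff function, and the five-year horizon are all invented for demonstration, and tabular Q-learning stands in for the more sophisticated algorithms the text alludes to.

```python
import random

# Hypothetical toy problem: each year, a planner directs a fixed annual
# budget to one of two investments. The payoffs are made up for
# illustration: "transit" has diminishing returns over time, "road"
# pays a flat amount. State = current year; action = chosen investment.
ACTIONS = ["transit", "road"]
HORIZON = 5  # planning horizon in years (assumed)

def reward(action, year):
    # Assumed payoff structure, not derived from any real data.
    return 2.0 / (year + 1) if action == "transit" else 0.8

def q_learning(episodes=2000, alpha=0.1, gamma=0.95, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    # Tabular Q-values indexed by (year, action).
    q = {(y, a): 0.0 for y in range(HORIZON) for a in ACTIONS}
    for _ in range(episodes):
        for year in range(HORIZON):
            # Epsilon-greedy exploration.
            if rng.random() < epsilon:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[(year, x)])
            r = reward(a, year)
            # Value of the best action in the next year (0 at the horizon).
            nxt = 0.0 if year + 1 >= HORIZON else max(
                q[(year + 1, b)] for b in ACTIONS)
            # Standard Q-learning update.
            q[(year, a)] += alpha * (r + gamma * nxt - q[(year, a)])
    # Extract the greedy investment policy, one action per year.
    return [max(ACTIONS, key=lambda a: q[(y, a)]) for y in range(HORIZON)]

print(q_learning())
```

Under these assumed payoffs the learned policy front-loads transit investment while its returns are high, then switches to the flat-payoff option — the kind of sequencing insight that motivates applying RL to planning models, though real applications would replace the toy reward with a calibrated urban mobility model.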
