AI can cut urban traffic waiting times by nearly 50% and carbon emissions by over a quarter
A new academic study shows that deep reinforcement learning, when applied to traffic signal control, can sharply reduce waiting times, improve traffic flow, and lower carbon emissions, while remaining compatible with privacy and real-world deployment constraints.
The study, titled "Deep Reinforcement Learning for Sustainable Urban Mobility: A Bibliometric and Empirical Review" and published in Sensors, combines one of the largest bibliometric reviews ever conducted on artificial intelligence in smart cities with rigorous computational testing of AI-driven traffic systems.
Urban mobility emerges as the most mature AI domain
The analysis maps global research trends, dominant technologies, institutional leadership, and thematic clusters within AI-driven smart city research. Seven major urban domains emerge from the data: mobility, energy, safety, healthcare, smart living, pollution management, and industry.
Among these, urban mobility stands out decisively. The authors find that transportation systems show the highest research maturity, the strongest alignment with sustainability goals, and the clearest pathways for empirical validation. Unlike many smart city applications that depend on fragmented data or long-term social adoption, mobility systems generate continuous, high-quality data and allow real-time experimentation through simulation and live deployment.
Traffic congestion, fuel consumption, and emissions also offer quantifiable performance indicators, making mobility uniquely suited for rigorous testing of AI effectiveness. The study notes that improvements in traffic flow have direct spillover effects on energy demand, emergency response times, air quality, and industrial logistics, amplifying the societal value of successful interventions.
To bridge the persistent gap between academic research and operational deployment, the authors introduce a Computational Integration Framework. This framework uses bibliometric evidence to guide domain selection, align AI techniques with application needs, and define sustainability-focused performance metrics. Instead of proposing new algorithms, the framework integrates established AI methods into a decision-support structure that can inform real-world planning and investment.
Deep reinforcement learning delivers measurable traffic and climate gains
To validate the framework, the research team conducts an extensive computational experiment using deep reinforcement learning to optimize traffic signal control. The model is implemented in a realistic urban traffic simulator, representing a multi-intersection city grid with stochastic vehicle arrivals and real-world driving dynamics.
The AI system is trained using a reward structure that balances efficiency and sustainability. It penalizes excessive waiting time, congestion, fuel use, and carbon emissions while rewarding smooth traffic throughput. This design reflects a growing shift away from purely efficiency-driven optimization toward climate-aware urban control systems.
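A multi-objective reward of this kind can be sketched as a weighted sum of penalties and a throughput bonus. The function below is illustrative only: the signal names and weights are assumptions, not the paper's actual formulation.

```python
# Illustrative sketch of a climate-aware traffic-signal reward.
# The weights (w_*) and the choice of signals are assumptions for
# illustration, not the study's published reward design.

def traffic_reward(wait_time_s, queue_len, fuel_l, co2_kg, throughput_veh,
                   w_wait=1.0, w_queue=0.5, w_fuel=2.0, w_co2=2.0, w_flow=1.5):
    """Penalize waiting time, congestion, fuel use, and CO2 emissions
    while rewarding vehicles cleared through the intersection."""
    penalty = (w_wait * wait_time_s
               + w_queue * queue_len
               + w_fuel * fuel_l
               + w_co2 * co2_kg)
    return w_flow * throughput_veh - penalty

# Example: 120 s cumulative wait, 8 queued vehicles, 0.6 L of fuel,
# 1.4 kg of CO2, and 15 vehicles served in the control interval.
r = traffic_reward(120.0, 8, 0.6, 1.4, 15)
```

Raising the emissions weights relative to the throughput weight is what shifts the learned policy from purely efficiency-driven control toward the climate-aware behavior the study describes.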
The results are striking. Compared with fixed-time traffic signals, the deep reinforcement learning controller reduces average vehicle waiting time by approximately 48 percent. Traffic throughput increases by more than 30 percent, indicating that roads are used more efficiently without expanding infrastructure. Fuel consumption falls by over 30 percent, and carbon dioxide emissions drop by roughly 27 percent.
These gains outperform traditional adaptive methods such as max-pressure control and hybrid deep learning systems that rely on static rules. The AI agent continuously learns from traffic conditions, reallocating green phases dynamically to prevent queue buildup and minimize stop-and-go driving, a major contributor to urban emissions.
Crucially, the study does not treat privacy as an afterthought. A federated version of the reinforcement learning model is also tested, allowing intersections to train locally without sharing raw traffic data. This decentralized approach preserves nearly 96 percent of the performance achieved by the centralized system while maintaining strong data protection, addressing one of the most persistent barriers to AI deployment in public infrastructure.
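The core of that decentralized setup can be sketched in a few lines: each intersection takes training steps on its own observations, and only model weights, never raw traffic data, are sent for aggregation. The function names and the toy numbers below are illustrative assumptions in the spirit of federated averaging, not the study's implementation.

```python
# Minimal federated-averaging sketch: intersections train locally and
# share only weights with the aggregator. All names and values here are
# illustrative, not taken from the study's codebase.
import numpy as np

def local_update(weights, gradient, lr=0.01):
    """One gradient step at a single intersection, using only local data."""
    return weights - lr * gradient

def federated_average(local_weights):
    """Aggregator combines per-intersection weights (FedAvg-style mean)."""
    return np.mean(np.stack(local_weights), axis=0)

# Three intersections start from the same global model but observe
# different traffic, so their local updates differ.
global_w = np.zeros(4)
grads = [np.array([1.0, 0.0, 0.5, 0.2]),
         np.array([0.2, 1.0, 0.1, 0.4]),
         np.array([0.6, 0.3, 1.0, 0.0])]
local_ws = [local_update(global_w, g) for g in grads]
global_w = federated_average(local_ws)  # raw observations never leave each site
```

Because only the averaged weights circulate, each intersection's camera and loop-detector data stays on site, which is the privacy property the study highlights.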
The findings suggest that sustainability-focused AI control does not require sacrificing performance or scalability. Instead, carefully designed learning systems can deliver environmental benefits alongside operational efficiency.
- FIRST PUBLISHED IN:
- Devdiscourse

