Energy-efficient AI can deliver real-time waste sorting


CO-EDP, VisionRI | Updated: 24-01-2026 19:32 IST | Created: 24-01-2026 19:32 IST
Representative Image. Credit: ChatGPT

Global waste generation is rising at a pace that is outstripping the capacity of existing recycling systems, placing growing pressure on municipalities, budgets, and the environment. Artificial intelligence (AI) has long been promoted as a solution, but the carbon cost of large AI models has raised new concerns about whether digital tools designed to protect the environment may be undermining it.

A new study titled “DWaste: Greener AI for Waste Sorting Using Mobile and Edge Devices” evaluates how computer vision models can be redesigned and deployed to balance accuracy, speed, and environmental impact, shifting waste sorting AI away from energy-intensive cloud systems toward low-power mobile and edge devices.

Why greener AI is becoming essential for waste management

Global waste volumes are projected to grow dramatically by mid-century, while recycling rates in many advanced economies have stagnated. A major contributor to low recycling efficiency is contamination caused by improper sorting, which raises processing costs and limits material recovery. Traditional approaches rely heavily on manual labor or centralized industrial machinery, both of which face scalability and cost constraints.

AI offers the ability to automate waste classification and detection, but most high-performing models require powerful hardware, continuous connectivity, and energy-intensive computation. These requirements conflict directly with sustainability goals. The study frames this contradiction as a core challenge for modern AI development: improving environmental outcomes without increasing the carbon footprint of the technology itself.

To address this, the research adopts a “Greener AI” perspective, which evaluates models not only on predictive accuracy but also on inference speed, memory consumption, model size, and carbon emissions. Rather than assuming that the most accurate model is the best solution, the study argues that real-world waste management demands systems that are efficient, resilient, and deployable at scale in constrained environments.

The research focuses specifically on mobile phones and edge devices, such as small embedded systems installed near waste bins or sorting facilities. These environments require offline capability, low latency, and minimal power consumption. Cloud-dependent AI systems may offer high accuracy, but they introduce delays, privacy concerns, and ongoing energy costs that undermine their suitability for decentralized waste sorting.

Benchmarking accuracy against energy and carbon cost

The analysis focuses on benchmarking widely used deep learning models across image classification and object detection applications. The research compares heavyweight classification architectures such as EfficientNetV2, ResNet50, and ResNet101 with lightweight alternatives like MobileNet, alongside object detection models including YOLOv8n and YOLOv11n.

The results reveal a clear and consistent trade-off. Large classification models deliver very high accuracy, often exceeding 95 percent, but at the cost of large model sizes, higher latency, and substantially greater energy consumption. These models require more memory, take longer to process images, and generate higher carbon emissions during both training and deployment. While suitable for laboratory benchmarks or cloud-based analysis, they are poorly matched to real-time, on-device waste sorting.

On the other hand, lightweight object detection models demonstrate a different balance. Although their accuracy is lower than that of the largest classifiers, their performance remains strong enough for practical waste sorting tasks. More importantly, they achieve ultra-fast inference speeds, small model sizes, and dramatically reduced energy consumption. In long-term deployment scenarios, where inference is performed continuously, these differences translate into major sustainability gains.

The study introduces a modified object detection model that enhances accuracy without sacrificing efficiency. By integrating a lightweight attention mechanism into a compact detection architecture, the research improves precision and recall while keeping computational overhead low. This design choice reflects the broader argument of the paper: targeted architectural improvements can yield meaningful performance gains without resorting to brute-force scaling.

The study also demonstrates the importance of model quantization, a technique that reduces numerical precision to shrink model size and memory usage. Quantization significantly lowers VRAM requirements and speeds up inference, enabling deployment on low-end hardware. In some cases, model size is reduced by more than two-thirds, making previously impractical models viable for edge deployment.
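The study does not reproduce its quantization pipeline here, but the arithmetic behind the size reduction is straightforward. The sketch below illustrates the principle with a toy symmetric per-tensor int8 scheme (the weight values, tensor size, and scheme are invented for illustration and are not the study's exact method): storing each weight in one byte instead of four cuts model size by 75 percent, consistent with the "more than two-thirds" reduction reported.

```python
import random

def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: choose a scale so the
    largest-magnitude weight maps to 127, then round each weight."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for inference-time math."""
    return [v * scale for v in q]

random.seed(0)
weights = [random.gauss(0, 0.1) for _ in range(65536)]  # toy layer

q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

fp32_bytes = len(weights) * 4  # float32: 4 bytes per weight
int8_bytes = len(q) * 1        # int8: 1 byte per weight
max_err = max(abs(a - b) for a, b in zip(weights, restored))

print(f"float32: {fp32_bytes} bytes, int8: {int8_bytes} bytes")
print(f"size reduction: {1 - int8_bytes / fp32_bytes:.0%}")  # 75%
print(f"max abs error: {max_err:.5f} (at most half the scale step)")
```

Production toolchains (e.g. TensorFlow Lite or PyTorch quantization) add per-channel scales and calibration, but the size and accuracy trade-off follows the same logic.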

Carbon emissions are treated as a first-class metric throughout the analysis. By measuring emissions across data preparation, training, and inference, the study shows that while training remains energy-intensive for all deep learning models, inference emissions dominate the environmental impact in real-world use. Lightweight detection models produce near-negligible carbon emissions per prediction, making them far more sustainable when deployed at scale.
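The operational-emissions arithmetic behind this claim can be made concrete. In the sketch below, per-prediction CO2 is estimated as device power times inference latency, converted to kilowatt-hours and multiplied by grid carbon intensity; all numbers (5 W edge device, 300 W server, 400 gCO2/kWh grid) are illustrative assumptions, not figures from the study.

```python
def inference_emissions_g(power_watts, latency_s, grid_gco2_per_kwh):
    """Operational CO2 per prediction: energy used (kWh) times the
    carbon intensity of the electricity grid (gCO2 per kWh)."""
    energy_kwh = power_watts * latency_s / 3_600_000  # watt-seconds -> kWh
    return energy_kwh * grid_gco2_per_kwh

# Illustrative comparison: a 5 W edge device at 20 ms per frame versus
# a 300 W GPU server at 50 ms per frame, both on a 400 gCO2/kWh grid.
edge = inference_emissions_g(5, 0.020, 400)
cloud = inference_emissions_g(300, 0.050, 400)

print(f"edge:  {edge:.2e} g CO2 per inference")
print(f"cloud: {cloud:.2e} g CO2 per inference")
print(f"ratio: {cloud / edge:.0f}x")  # 150x under these assumptions
```

Even with small per-prediction numbers, continuous inference across thousands of deployed devices compounds the gap, which is why the study weights inference emissions so heavily.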

From laboratory models to real-world deployment

The optimized detection model is integrated into both a mobile application and a dedicated edge device, demonstrating that real-time waste detection can operate reliably without cloud connectivity. This capability is critical for deployment in public spaces, developing regions, and infrastructure-constrained environments.

The research argues that object detection is inherently better suited to waste sorting than pure classification. In real-world settings, waste items are often partially obscured, mixed, or poorly positioned. Detection models that localize and classify objects simultaneously are more robust under these conditions than models that assume clean, centered images.
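The practical difference is easy to see in code. A whole-image classifier must assign a single label to a frame, while a detector returns a labeled, scored bounding box for each item, so mixed or cluttered frames can still be routed item by item. The sketch below is a toy illustration; the class names, confidence values, threshold, and bin mapping are invented and do not come from the study.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float
    box: tuple  # (x1, y1, x2, y2) in pixels

# Hypothetical detector output for one camera frame of mixed waste.
frame_detections = [
    Detection("plastic_bottle", 0.91, (40, 60, 180, 300)),
    Detection("aluminum_can", 0.84, (220, 100, 320, 260)),
    Detection("paper", 0.35, (10, 10, 50, 40)),  # low confidence: likely clutter
]

def route_items(detections, threshold=0.5):
    """Keep confident detections and map each one to a sorting bin."""
    bins = {"plastic_bottle": "plastics", "aluminum_can": "metals",
            "paper": "paper"}
    return [(d.label, bins[d.label])
            for d in detections if d.confidence >= threshold]

print(route_items(frame_detections))
# -> [('plastic_bottle', 'plastics'), ('aluminum_can', 'metals')]
```

A classifier given the same frame would be forced to collapse two recyclable items and background clutter into one label, which is exactly the failure mode the study attributes to classification-only pipelines.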

The findings also highlight a broader systems perspective. Waste sorting AI cannot be evaluated in isolation from hardware, energy supply, or operational context. A slightly less accurate model that runs efficiently on a smartphone may deliver far greater environmental benefit than a highly accurate model that requires constant cloud computation. Sustainability, in this sense, becomes an optimization problem across the entire lifecycle of the system.

The study also acknowledges its limitations. Dataset diversity remains a challenge, as lighting conditions, camera quality, and waste appearance vary widely in real-world environments. The carbon accounting focuses on operational emissions and does not fully capture the embodied emissions of hardware production. However, the author argues that these limitations reinforce, rather than undermine, the case for lightweight, adaptable models that can be updated and redeployed with minimal cost.

  • FIRST PUBLISHED IN:
  • Devdiscourse