Energy-Efficient Deep Learning: A Thermodynamic Perspective on Gradient Descent with Trusted Federated Explainability for Integrity, Accountability, and Trade-off Control
Author(s): Mohan Siva Krishna Konakanchi
Publication #: 2603036
Date of Publication: 16.04.2020
Country: United States
Pages: 1-7
Published In: Volume 6 Issue 2 April-2020
DOI: https://doi.org/10.62970/IJIRCT.v6.i2.2603036
Abstract
Deep learning has delivered large performance gains across domains, but training and operating modern models consumes substantial energy and produces associated carbon emissions. While the community has explored systems-level and algorithmic efficiency, a gap remains in how practitioners reason about optimization energy usage in a principled, operationally actionable way. This paper proposes a thermodynamic perspective on gradient descent that treats optimization as a controlled dissipative process: training converts compute work into model improvement while unavoidably dissipating energy through noisy updates, variance, and repeated processing of data. We use this perspective to motivate practical energy-efficiency interventions, such as temperature-like noise control, dissipation-aware step sizing, and “free-energy”-style early stopping criteria, without introducing complex formulas. In addition, production deep learning is increasingly distributed across organizational silos (teams, regions, vendors) and computational boundaries (edge, on-prem, cloud). Cross-silo collaboration is constrained by privacy, policy, and proprietary data. We therefore propose ThermoTrust-FL, a trust-metric-based federated learning framework that ensures integrity and accountability while sharing energy-efficient optimization improvements across silos. ThermoTrust-FL introduces: (i) a trust metric that quantifies participant integrity using provenance attestations, update consistency, evaluation reliability, and policy compliance; (ii) trust-aware robust aggregation that reduces poisoning risk while preserving cross-silo privacy; and (iii) a controller that explicitly quantifies and optimizes the trade-off between explainability and performance, enabling energy-aware governance decisions that remain auditable.
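The "dissipation-aware step sizing" idea can be illustrated with a minimal sketch: treat the squared deviation of the current gradient from a running gradient average as a variance (dissipation) proxy, and shrink the step size when that proxy dominates the gradient signal. The function name, the EMA-based noise estimate, and the specific scaling rule below are illustrative assumptions, not the paper's published algorithm.

```python
import numpy as np

def dissipation_aware_lr(base_lr, grad, grad_ema, beta=0.9, eps=1e-8):
    """Scale the step size down when gradient noise (a dissipation proxy) is high.

    grad_ema is an exponential moving average of past gradients; the mean
    squared deviation of the current gradient from it serves as a crude
    variance / dissipation proxy.
    """
    grad_ema = beta * grad_ema + (1.0 - beta) * grad
    noise = float(np.mean((grad - grad_ema) ** 2))   # dissipation proxy
    signal = float(np.mean(grad_ema ** 2)) + eps     # coherent-gradient proxy
    lr = base_lr * signal / (signal + noise)         # shrink lr as noise dominates
    return lr, grad_ema
```

In this sketch the effective step size never exceeds `base_lr` and approaches it only when consecutive gradients agree, which is one simple way to make "compute work" translate into model improvement rather than noise.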
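Likewise, the trust metric and trust-aware aggregation can be sketched in a few lines: combine the four integrity signals named above into a score in [0, 1], then drop low-trust participants and average the rest with trust-proportional weights. The signal weighting, the threshold value, and both function names are illustrative assumptions rather than the ThermoTrust-FL specification.

```python
import numpy as np

def trust_score(provenance_ok, consistency, eval_reliability, policy_ok,
                weights=(0.25, 0.35, 0.25, 0.15)):
    """Combine provenance, update consistency, evaluation reliability,
    and policy compliance into a single score in [0, 1].
    The relative weights are an illustrative assumption."""
    signals = np.array([float(provenance_ok), consistency,
                        eval_reliability, float(policy_ok)])
    return float(np.clip(np.dot(weights, signals), 0.0, 1.0))

def trust_weighted_aggregate(updates, trusts, min_trust=0.3):
    """Trust-aware aggregation: discard updates below a trust threshold,
    then average the remainder weighted by trust."""
    kept = [(u, t) for u, t in zip(updates, trusts) if t >= min_trust]
    if not kept:
        raise ValueError("no update met the trust threshold")
    total = sum(t for _, t in kept)
    return sum(t * u for u, t in kept) / total
```

Thresholding bounds the influence of outright adversarial or faulty clients, while trust-proportional weighting softly discounts merely unreliable ones; both effects contribute to the poisoning-risk reduction described above.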
We evaluate the approach using a controlled prototype simulation of heterogeneous clients training deep models under non-IID data, variable compute budgets, and adversarial/faulty participants. Results show that thermodynamics-inspired controls can reduce energy proxy cost while maintaining accuracy, and that trust-aware federated aggregation improves robustness and stabilizes energy-efficiency gains under integrity failures. Moderate explanation budgets achieve stable, actionable explanations with limited performance loss. We conclude with deployment guidance for energy-efficient, trusted learning at scale.
Keywords: energy-efficient deep learning, gradient descent, thermodynamic perspective, stochastic optimization, federated learning, trust metrics, explainable AI, integrity, accountability