FORTEX: A Formal Framework for Optimizing the Explainability-Efficiency Trade-off in High-Stakes AI

Author(s): Mohan Siva Krishna Konakanchi

Publication #: 2512016

Date of Publication: 06.12.2020

Country: United States

Pages: 1-5

Published In: Volume 6 Issue 6 December-2020

DOI: https://doi.org/10.62970/IJIRCT.v6.i6.2512016

Abstract

The deployment of Artificial Intelligence (AI) in high-stakes domains such as healthcare, finance, and autonomous systems is contingent on two often-conflicting requirements: the model must be highly efficient for real-time decision-making, yet its reasoning must be transparent and explainable for verification, trust, and regulatory compliance. The trade-off between model efficiency and explainability is typically managed in an ad-hoc, qualitative manner, lacking a formal basis for optimization. This paper introduces FORTEX (Formal Optimization of Resilient and Trusted Explainability), a novel framework that formalizes and quantifies this critical trade-off. FORTEX proposes concrete, computable metrics for both algorithmic explainability (X), based on model complexity and decomposability, and computational efficiency (E), based on latency, memory, and FLOPs. Using these metrics, we frame the selection of an optimal model as a multi-objective optimization problem and present a method to generate the Pareto-optimal frontier, enabling stakeholders to make informed, data-driven decisions. To address the challenge of training models on sensitive, distributed data common in these domains, we integrate FORTEX with a Trust-Metric-based Federated Learning (TMFL) protocol. TMFL secures the collaborative training process by dynamically evaluating and weighting contributions from participating silos, ensuring the integrity of the final model. We validate our framework on benchmark high-stakes datasets, demonstrating its ability to systematically map the explainability-efficiency landscape and to produce robust, trustworthy models in a decentralized setting.
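The Pareto-frontier selection described in the abstract can be illustrated with a minimal sketch. The abstract does not give FORTEX's exact definitions of the explainability score X or the efficiency score E, so the candidate models and their scores below are hypothetical placeholders (higher is better for both objectives); only the non-domination logic is shown.

```python
# Illustrative sketch, not the paper's implementation: X and E values
# for each candidate model are assumed placeholders in [0, 1].

def pareto_frontier(models):
    """Return the models not dominated on (X, E).

    `models` is a list of (name, X, E) tuples. A model is dominated if
    some other model is at least as good on both objectives and strictly
    better on at least one.
    """
    frontier = []
    for name, x, e in models:
        dominated = any(
            (x2 >= x and e2 >= e) and (x2 > x or e2 > e)
            for _, x2, e2 in models
        )
        if not dominated:
            frontier.append((name, x, e))
    return frontier

candidates = [
    ("decision_tree", 0.90, 0.40),  # highly explainable, less efficient
    ("gbm",           0.60, 0.70),
    ("deep_net",      0.20, 0.95),  # opaque, very efficient
    ("linear",        0.85, 0.35),  # dominated by decision_tree
]
print(pareto_frontier(candidates))
```

A stakeholder would then pick a point on the returned frontier according to the domain's regulatory and latency constraints, rather than committing to a single scalarized objective up front.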

Keywords: Explainable AI (XAI), Model Efficiency, Multi-Objective Optimization, Federated Learning, Trust Metrics, High-Stakes AI.
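The TMFL aggregation step the abstract describes, weighting each silo's contribution by a dynamically evaluated trust score, might be sketched as follows. This is an assumption-laden illustration: the abstract does not specify how trust scores are computed, so they are taken as given inputs here, and aggregation is shown as a simple trust-weighted average of parameter vectors.

```python
# Hypothetical sketch of trust-weighted aggregation. The silo names and
# trust values are invented for illustration; the paper's actual trust
# scoring rule is not given in the abstract.

def trust_weighted_aggregate(updates, trust_scores):
    """Combine per-silo parameter updates using normalized trust weights.

    updates: dict of silo_id -> list of floats (model parameters)
    trust_scores: dict of silo_id -> non-negative trust value
    """
    total = sum(trust_scores[silo] for silo in updates)
    num_params = len(next(iter(updates.values())))
    aggregated = [0.0] * num_params
    for silo, params in updates.items():
        weight = trust_scores[silo] / total  # normalize trust to a weight
        for i, p in enumerate(params):
            aggregated[i] += weight * p
    return aggregated

updates = {"silo_a": [1.0, 2.0], "silo_b": [3.0, 4.0]}
trust = {"silo_a": 0.75, "silo_b": 0.25}
print(trust_weighted_aggregate(updates, trust))
```

Down-weighting low-trust silos in this way bounds the influence any single compromised participant can exert on the aggregated model, which is the integrity property the abstract attributes to TMFL.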
