Human-AI Pair Programming: Evaluating Trust, Efficiency, and Defect Incidence
Author(s): Arjun Deshraje Urs
Publication #: 2507032
Date of Publication: 27.07.2025
Country: United States
Pages: 1-5
Published In: Volume 11 Issue 4 July-2025
DOI: https://doi.org/10.5281/zenodo.16501130
Abstract
The advent of human-AI pair programming, characterized by collaborative interactions between developers and intelligent code assistants such as GitHub Copilot and Amazon CodeWhisperer, represents a pivotal shift in software engineering practice. This study presents an empirical investigation into the influence of these AI copilots on trust calibration, task efficiency, error proliferation, and the onboarding of software engineers. Using a mixed-methods approach with 72 participants across three experience levels, we conducted a within-subjects experiment comparing solo programming with AI-assisted development. While AI copilots increased code generation velocity by 19.7% on average, they concurrently increased defect rates by 71% among novice developers (p < 0.001). These findings inform the design of more reliable, interpretable AI assistants and highlight the need for experience-dependent training approaches.
Keywords: AI copilots, collaborative programming, trust calibration, developer efficiency, defect rates, software onboarding