This paper presents an interpretable feature attribution framework for explainable neural networks in critical decision-support settings. The framework aims to improve model transparency by jointly analyzing raw feature activations and attribution-based relevance scores derived from decision-path contributions. A synthetic benchmark dataset with statistically separable class distributions (p < 0.005) was constructed to enable controlled evaluation of feature-level and attribution-level behavior. One-way ANOVA, together with Welch and Fisher statistics, revealed statistically significant between-group differences for all primary features and attribution variables, indicating strong effect separation in both the prediction and explanation domains. Group-level descriptive analysis further showed higher mean activation and attribution weights for the positive decision class, evidence of coherent alignment between model outputs and attribution signals. Mean-difference plots with 95% confidence intervals visually reinforced these findings, displaying consistently non-overlapping confidence bands across features and attribution metrics. Taken together, the results support the claim that the proposed explainable neural network framework yields interpretable, statistically coherent decision signals suitable for risk-sensitive, safety-critical, and compliance-driven applications. The study demonstrates how quantitative statistical validation strengthens the trust, reliability, and auditability of model explanations, promoting responsible AI deployment in operational decision-support pipelines.
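As a concrete illustration of the validation procedure summarized above, the sketch below simulates two separable classes and applies Welch's unequal-variance t-test alongside the classic Fisher one-way ANOVA F-test per feature, reporting each mean difference with a Welch-Satterthwaite 95% confidence interval. This is a minimal sketch of the general technique, not the paper's implementation; the function and parameter names (`simulate_groups`, `shift`, `n_features`) are illustrative assumptions.

```python
# Minimal sketch of the Welch/Fisher validation step described above.
# Assumes a two-class setup with Gaussian, mean-shifted features.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulate_groups(n=200, shift=0.8, n_features=4):
    """Two synthetic classes with mean-shifted (separable) features."""
    neg = rng.normal(0.0, 1.0, size=(n, n_features))
    pos = rng.normal(shift, 1.0, size=(n, n_features))
    return neg, pos

neg, pos = simulate_groups()

for j in range(neg.shape[1]):
    a, b = pos[:, j], neg[:, j]
    # Welch's t-test: does not assume equal group variances.
    t, p_welch = stats.ttest_ind(a, b, equal_var=False)
    # Classic (Fisher) one-way ANOVA F-test; for two groups, F == t**2.
    F, p_anova = stats.f_oneway(a, b)
    # Mean difference with a 95% CI using Welch-Satterthwaite df.
    diff = a.mean() - b.mean()
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    se = np.sqrt(va + vb)
    df = se**4 / (va**2 / (len(a) - 1) + vb**2 / (len(b) - 1))
    lo, hi = diff + np.array([-1, 1]) * stats.t.ppf(0.975, df) * se
    print(f"feature {j}: diff={diff:.3f}, "
          f"95% CI=({lo:.3f}, {hi:.3f}), Welch p={p_welch:.2e}")
```

With the default mean shift of 0.8 and n = 200 per group, every feature's confidence interval excludes zero, mirroring the kind of non-overlapping bands reported in the study.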
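The mean-difference plots with confidence bands mentioned above could be produced with a standard errorbar chart, as in the hedged sketch below. The values and labels here are placeholders standing in for the per-feature estimates computed in the previous snippet, not results from the paper.

```python
# Hedged sketch of a mean-difference plot with 95% confidence bands.
import numpy as np
import matplotlib.pyplot as plt

# Placeholder per-feature mean differences and CI half-widths; in
# practice these come from the Welch analysis shown earlier.
labels = ["f1", "f2", "f3", "attr1", "attr2"]
diff = np.array([0.81, 0.76, 0.92, 0.68, 0.74])
half = np.array([0.19, 0.21, 0.18, 0.22, 0.20])

fig, ax = plt.subplots(figsize=(6, 3))
ax.errorbar(range(len(diff)), diff, yerr=half, fmt="o", capsize=4)
ax.axhline(0.0, color="grey", lw=1)  # no-difference reference line
ax.set_xticks(range(len(labels)))
ax.set_xticklabels(labels)
ax.set_ylabel("mean difference (pos - neg)")
ax.set_title("Mean differences with 95% CIs")
fig.tight_layout()
plt.show()
```

A band that stays entirely above the zero reference line corresponds to the "consistent non-overlapping confidence bands" the abstract describes.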