Factorization-based models, which gained prominence during the Netflix Prize competition (2006–2009), have since demonstrated remarkable efficiency in predicting user ratings. However, these methods often struggle to provide interpretable explanations for their recommendations. In contrast, argumentation-based methods excel in explainability but typically underperform in predictive accuracy. To address this trade-off, we propose Context-Aware Feature-Attribution Through Argumentation (CA-FATA), a novel framework that combines the predictive power of matrix factorization with the interpretability of argumentation. In CA-FATA, each user-item interaction is modeled as an argumentation framework (AF): item features are represented as arguments, and the user's ratings of those features determine the arguments' strengths. This structure frames feature attribution as a transparent computational process, making recommendations directly interpretable. Moreover, the framework seamlessly incorporates side information, such as user context, to further improve predictive performance. Empirical evaluations on real-world datasets show that CA-FATA outperforms existing argumentation-based methods in both predictive accuracy and interpretability, while achieving competitive results against state-of-the-art context-free and context-aware models. Additionally, CA-FATA supports explanation templates, interactive explanations, and contrastive explanations, and mitigates the cold-start problem through user clustering.
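To make the core mechanism concrete — item features acting as arguments whose strengths are derived from the user's feature ratings — the following is a minimal, hypothetical sketch. The function names, the [0, 1] strength normalization, and the rating-weighted aggregation rule are illustrative assumptions for exposition, not CA-FATA's actual formulation:

```python
# Illustrative sketch only: each item feature is treated as an "argument"
# whose strength comes from the user's rating of that feature.
# The aggregation rule (an average of strengths, rescaled to the rating
# range) is an assumed placeholder, not the paper's actual formula.

def argument_strengths(feature_ratings, max_rating=5.0):
    """Map a user's per-feature ratings (e.g. 1-5) to strengths in [0, 1]."""
    return {f: r / max_rating for f, r in feature_ratings.items()}

def predict_rating(item_features, strengths, max_rating=5.0):
    """Aggregate the strengths of the arguments (features) an item exhibits.

    Features the user rates highly act as arguments supporting the
    recommendation; features rated low argue against it.
    """
    relevant = [strengths[f] for f in item_features if f in strengths]
    if not relevant:
        return max_rating / 2  # no known arguments: fall back to neutral
    return max_rating * sum(relevant) / len(relevant)

# A user who likes "comedy" (5/5) but dislikes "horror" (1/5):
strengths = argument_strengths({"comedy": 5, "horror": 1})
print(predict_rating(["comedy"], strengths))            # comedy-only item
print(predict_rating(["comedy", "horror"], strengths))  # mixed item, lower
```

Because the prediction is a direct function of named feature arguments and their strengths, each recommendation can be explained by listing which arguments supported or opposed it — the transparency property the abstract attributes to the AF-based formulation.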