Using a structured narrative review approach, we analysed more than 70 peer-reviewed empirical studies, together with policy documents and regulatory frameworks, spanning applications in clinical decision support systems, diagnostics, mental health interventions, and personalised medicine. Particular attention is given to the perspectives of diverse stakeholders, including patients, clinicians, data scientists, and regulators. We assess fairness using demographic parity and equalised odds, and evaluate transparency via explainability metrics and auditability practices. Our findings highlight persistent demographic bias, limited stakeholder participation, and regulatory fragmentation. We propose a typology of responsible AI metrics, comprising data representativeness indices, fairness-accuracy trade-off scores, and human-AI oversight benchmarks, to guide the ethical evaluation and deployment of AI models. By emphasising intersectionality, contextual equity, and co-designed governance, this study moves beyond generic ethical appeals towards concrete implementation strategies. Our contribution offers a practical, interdisciplinary roadmap for aligning AI innovation with patient-centred values, institutional accountability, and evolving EU regulatory standards in the healthcare sector.
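As a minimal sketch of how the two fairness criteria named above can be operationalised, the snippet below computes demographic-parity and equalised-odds gaps for a binary classifier and a binary protected attribute. The data, function names, and thresholds here are hypothetical and purely illustrative; they are not drawn from the reviewed studies.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups,
    i.e. |P(yhat=1 | A=0) - P(yhat=1 | A=1)|. Assumes exactly two groups."""
    g0, g1 = np.unique(group)
    return abs(y_pred[group == g0].mean() - y_pred[group == g1].mean())

def equalised_odds_difference(y_true, y_pred, group):
    """Largest between-group gap in true-positive rate or false-positive rate.
    Conditions on the true outcome, unlike demographic parity."""
    g0, g1 = np.unique(group)
    gaps = []
    for label in (1, 0):  # label=1 gives the TPR gap, label=0 the FPR gap
        r0 = y_pred[(group == g0) & (y_true == label)].mean()
        r1 = y_pred[(group == g1) & (y_true == label)].mean()
        gaps.append(abs(r0 - r1))
    return max(gaps)

# Synthetic example data (illustrative only).
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)    # binary protected attribute
y_true = rng.integers(0, 2, size=1000)   # ground-truth outcomes
y_pred = rng.integers(0, 2, size=1000)   # model predictions

print(demographic_parity_difference(y_pred, group))
print(equalised_odds_difference(y_true, y_pred, group))
```

Demographic parity compares positive-prediction rates across groups, whereas equalised odds additionally conditions on the true outcome, so the same model can satisfy one criterion while violating the other; this is why the two are typically reported together rather than interchangeably.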