A visionary business analyst and product owner with an 18-year track record of driving industry-transforming financial solutions in the UK, Olubunmi Martins-Afolabi brings exceptional leadership to regulatory compliance, risk modeling, and technology implementation. Her work has delivered nationwide impact by modernizing financial processes, strengthening UK banking resilience, and ensuring compliance with evolving global regulations.
Given her extensive experience in credit risk model implementations for major UK and global financial institutions, Martins-Afolabi is at the forefront of deploying Explainable AI (XAI) in wholesale credit risk assessment.
Over nearly two decades, she has delivered Probability of Default (PD) models, Loss Given Default (LGD) calculators, and other quantitative solutions critical to maintaining financial stability and meeting stringent regulatory standards like EU prudential regulations and IFRS 9. Martins-Afolabi's track record in risk modeling and regulatory compliance positions her to navigate the dual requirement of model accuracy (to protect capital adequacy) and explainability (to satisfy internal governance and external regulators).
Her collaborative work with business, risk, and IT stakeholders ensures advanced AI models are both robust and transparent, enabling senior management and regulators to trust outcomes for strategic decision-making.
By leveraging her Agile product development expertise, Martins-Afolabi structures AI-driven credit risk projects in a way that fosters iterative feedback and continuous improvement. Incorporating explainability from initial design through deployment ensures that model decisions can be traced back to specific risk drivers, satisfying compliance teams and bolstering confidence in AI-based credit evaluations.
Her success in probabilistic and stress-testing models across institutions like NatWest, JP Morgan, and HSBC provides a unique vantage point for embedding AI that clarifies risk drivers.
Martins-Afolabi focuses on the design and implementation of advanced wholesale credit risk financial models and regulatory reporting frameworks, with a strong emphasis on improving risk management strategies and ensuring compliance with evolving regulations. Her work primarily centers on the development of Probability of Default (PD) and Loss Given Default (LGD) models, which play a crucial role in enabling financial institutions to optimize product pricing strategies, enhance profitability, and strengthen their regulatory adherence.
Martins-Afolabi has also led significant projects related to Basel II, Basel III, IFRS 9, and Banking Package (CRR II/CRD V), directly contributing to capital adequacy and improving financial stability across global banking institutions.
Martins-Afolabi's focus extends to transforming financial risk assessments through automation, data visualization, and the implementation of scalable solutions for regulatory compliance. By pioneering innovative approaches, she has contributed to the evolution of financial risk management practices, including the migration of LGD models to cloud platforms and the development of reporting solutions that enhance transparency and reduce compliance risks.
Her efforts in leading regulatory reporting initiatives, particularly in CRD V implementations at institutions like JP Morgan Chase and Macquarie Bank, have ensured financial institutions meet stringent regulatory standards while improving overall risk governance. As machine learning techniques become mainstream in predicting credit defaults, Martins-Afolabi's pioneering role in ensuring these tools remain interpretable enhances not only accuracy but also the credibility of wholesale credit decisions in boardrooms and regulatory audits alike.
Foundations in Regulatory Scrutiny and the Drive for Transparency
The imperative for explainable AI in wholesale credit risk assessment is deeply rooted in the stringent demands of regulatory frameworks and the need for clear audit trails. Early experiences working under regimes like Basel II/III and CRD V highlighted the critical importance of not just model accuracy but also methodological transparency.
Martins-Afolabi notes, "My early work in credit risk modelling, particularly under regulatory frameworks such as Basel II/III, CRD V (Capital Requirements Directive V) and SA-CCR (Standardized Approach to Counterparty Credit Risk), helped to shape my appreciation for transparency and auditability in model formulation and implementation." This foundational understanding underscores that regulatory compliance is inextricably linked to the ability to explain how risk metrics are derived.
The expectation for clarity extends beyond regulatory bodies to internal stakeholders who rely on model outputs for critical decisions. "During the implementation of Basel II/III frameworks, I observed that regulators and internal stakeholders demanded not just accurate risk metrics (PD, LGD, RWA), but also transparent methodologies to justify outcomes, especially for complex counterparties," Martins-Afolabi explains.
This dual demand necessitates models that are not only powerful but also interpretable. The high-stakes nature of wholesale credit, involving complex and material exposures, further amplifies this need. Failing to provide clear explanations can lead to severe consequences, including regulatory penalties, diminished stakeholder trust, and flawed pricing decisions that ultimately impact client relationships and the institution's bottom line.
Balancing Accuracy and Interpretability
Designing AI-driven risk solutions within the regulated wholesale credit environment presents a complex balancing act between achieving high model accuracy and maintaining transparency. Martins-Afolabi identifies several core challenges inherent in this process.
One primary difficulty lies in the trade-off between the sophistication of AI models and their interpretability. She states, "AI models can often offer more accuracy compared to techniques like Merton Distance-to-Default. However, this increase in accuracy offered by AI models typically comes at the cost of easy interpretability."
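For context, the Merton Distance-to-Default benchmark she references is transparent precisely because it reduces to a closed-form expression in which every driver of the default probability is visible; in simplified form (ignoring the iterative estimation of asset value and volatility used in practice):

\[
DD = \frac{\ln(V_A / D) + \left(\mu - \tfrac{1}{2}\sigma_A^{2}\right)T}{\sigma_A\sqrt{T}}, \qquad PD \approx N(-DD),
\]

where \(V_A\) is the market value of the firm's assets, \(D\) the face value of debt due at horizon \(T\), \(\mu\) the expected asset return, \(\sigma_A\) the asset volatility, and \(N(\cdot)\) the standard normal distribution function. It is exactly this one-line traceability that more accurate machine-learning models tend to give up.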
This creates a significant hurdle, as complex models often function as "black boxes," making it difficult to trace the influence of individual inputs on the final output, a critical requirement for regulators and internal validation teams. Regulatory bodies like the European Banking Authority (EBA) and the Bank of England compound this challenge by setting high expectations for model transparency and justification.
"A risk model is unlikely to be approved by regulators if the model's decision-making process cannot be clearly explained and justified, especially when capital outcomes are impacted," Martins-Afolabi points out. Beyond regulatory approval, the adoption and effective use of these models by business users hinge on trust and understanding.
If credit officers cannot interpret why a counterparty receives a certain risk score, they may disregard the model, leading to frequent overrides that signal model inadequacy to regulators. Furthermore, the complexity of AI models complicates validation processes, slows down the model lifecycle, and introduces governance risks. The implementation and maintenance also demand significant resources, robust data pipelines, and continuous monitoring to manage potential instability and ensure that model behavior remains explainable and controlled.
Embedding Explainability Throughout the Model Lifecycle
To effectively address the challenges of transparency, explainability cannot be an afterthought; it must be integrated into the entire lifecycle of a credit risk model, such as a PD model. Martins-Afolabi emphasizes this proactive approach: "Working on the implementation of PD models, explainability is a concept that needs to be embedded from the very start, not just at the end."
This begins in the initial design phase, where the model's specific purpose, use case, and scope are clearly defined, for example, specifying a 12-month Through-the-Cycle (TTC) PD model for Large Corporates intended for regulatory capital calculations under Basel III, internal risk processes, and pricing support.
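That regulatory capital use case is part of why transparency is non-negotiable: under the Basel IRB approach, the capital requirement for a corporate exposure is a deterministic function of the PD (together with LGD and maturity), so anything the PD model cannot explain flows straight into the bank's capital figures. The sketch below implements the simplified public IRB corporate formula for illustration only; it omits refinements such as the SME firm-size adjustment, and the figures used are placeholders.

```python
# Simplified Basel IRB capital requirement for a corporate exposure.
# Illustration only; omits the SME firm-size adjustment and other refinements.
from math import exp, log, sqrt
from statistics import NormalDist

N = NormalDist().cdf        # standard normal CDF
N_inv = NormalDist().inv_cdf  # its inverse

def irb_corporate_capital(pd_, lgd, maturity, ead):
    """Return (K, RWA) under the simplified Basel IRB corporate formula."""
    # Asset correlation decreases as PD increases.
    w = (1 - exp(-50 * pd_)) / (1 - exp(-50))
    R = 0.12 * w + 0.24 * (1 - w)
    # Conditional (stressed) PD at the 99.9th percentile of the systematic factor.
    stressed_pd = N((N_inv(pd_) + sqrt(R) * N_inv(0.999)) / sqrt(1 - R))
    # Maturity adjustment.
    b = (0.11852 - 0.05478 * log(pd_)) ** 2
    ma = (1 + (maturity - 2.5) * b) / (1 - 1.5 * b)
    K = (lgd * stressed_pd - pd_ * lgd) * ma
    return K, 12.5 * K * ead

# Illustrative figures only.
print(irb_corporate_capital(pd_=0.01, lgd=0.45, maturity=2.5, ead=1_000_000))
```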
"Working with different stakeholders in workshops and requirements discussions, an understanding of what explainability means to each stakeholder is discussed and understood, e.g., credit officers want to understand what inputs are driving the obligor's PD output while validation teams want to understand the methodologies used in calculating the PDs," Martins-Afolabi elaborates. This collaborative process ensures that the model meets diverse needs.
Translating the model methodology into functional specifications requires a close partnership between quantitative teams and IT developers to ensure clear logic, well-defined input mapping, and parameterized, transparent calculations suitable for validation and auditing.
Finally, at deployment, clear guidance, including breakdowns of key score drivers and model limitations, must be provided to end-users like credit officers, often supplemented by reports that clearly articulate the top contributors to PD outputs in accessible business language, aligning with principles of responsible AI.
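As a minimal sketch of such a report (the field names, figures, and thresholds below are illustrative placeholders, not outputs of the models described in this article), per-obligor driver contributions can be rendered into plain business language for credit officers:

```python
# Hypothetical example: turning per-obligor driver contributions into a
# plain-language summary. Contribution values are illustrative additive
# effects on the PD estimate, e.g. from an attribution method such as SHAP.

def summarise_pd_drivers(obligor_id, pd_estimate, contributions, top_n=3):
    """Return a short, business-readable breakdown of the top PD drivers."""
    # Rank drivers by absolute contribution, largest first.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    lines = [f"Obligor {obligor_id}: estimated 12-month PD = {pd_estimate:.2%}"]
    for driver, effect in ranked[:top_n]:
        direction = "increases" if effect > 0 else "decreases"
        lines.append(f"  - {driver} {direction} the PD estimate by {abs(effect):.2%}")
    return "\n".join(lines)

# Illustrative inputs only.
print(summarise_pd_drivers(
    obligor_id="CORP-0042",
    pd_estimate=0.021,
    contributions={
        "Leverage (net debt / EBITDA)": 0.008,
        "Interest coverage ratio": -0.003,
        "Sector outlook (macro overlay)": 0.004,
    },
))
```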
Building Stakeholder Trust in AI-Driven Assessments
Establishing trust among diverse stakeholders—business users, risk managers, IT teams, and regulators—is paramount for the successful adoption of AI-generated credit risk assessments. Effective strategies focus on collaboration, clear communication, and demonstrating model reliability.
Drawing from broader experience in model implementation, Martins-Afolabi highlights the importance of early and continuous engagement. As mentioned previously regarding PD model implementation, "Stakeholder engagement is key at this stage. Collaboration between the Quants team, business users, and IT implementation is required regularly."
This foundational collaboration ensures that concerns are addressed proactively and that the model development aligns with user needs and regulatory expectations from the beginning, a practice supported by insights on effective AI implementation.
Transparency and practical demonstration are also vital in converting skepticism into acceptance. Recounting a specific project aimed at enhancing early warning capabilities, Martins-Afolabi shares a successful tactic: "My team ran a series of 'what-if' scenarios with credit and portfolio managers where we stress-tested use cases and walked them through how the model reacted—this helped turn initial skepticism into advocacy."
She further outlines effective strategies, stating that building trust involves several key actions: ensuring early involvement of all stakeholders, translating AI logic into business language, embedding explainability into the solution design, creating and testing 'what-if' scenarios, maintaining transparent governance and documentation, and fostering an efficient feedback loop from credit officers, model validation teams, and the IT implementation team throughout the process.
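A minimal sketch of the "what-if" walkthroughs she describes, under the assumption of a generic scoring function (the feature names, shocks, and toy model below are illustrative placeholders, not the institutions' actual models):

```python
# Hypothetical "what-if" walkthrough: apply scenario shocks to a counterparty's
# inputs and compare the model's risk score against the baseline.
# `toy_score_pd` is a placeholder; in practice this would call the deployed model.
import copy
import math

def toy_score_pd(features):
    # Placeholder logistic-style score; a real PD model would be far richer.
    z = -4.0 + 0.6 * features["leverage"] - 0.3 * features["interest_cover"]
    return 1.0 / (1.0 + math.exp(-z))

def run_what_if(score_pd, baseline_features, scenarios):
    """Return baseline and stressed PD estimates for each named scenario."""
    results = {"baseline": score_pd(baseline_features)}
    for name, shocks in scenarios.items():
        stressed = copy.deepcopy(baseline_features)
        stressed.update(shocks)  # overwrite only the shocked inputs
        results[name] = score_pd(stressed)
    return results

# Illustrative counterparty and scenarios.
baseline = {"leverage": 3.2, "interest_cover": 4.5}
scenarios = {
    "revenue_down_20pct": {"interest_cover": 3.1},
    "rates_up_200bp": {"leverage": 3.5, "interest_cover": 3.6},
}
print(run_what_if(toy_score_pd, baseline, scenarios))
```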
By implementing these strategies, institutions can build the necessary confidence for stakeholders to rely on AI-driven insights for critical credit decisions.
Meeting Governance Standards without Sacrificing Predictive Power
Navigating regulatory scrutiny requires a meticulous approach to ensure AI models meet stringent governance standards while preserving their essential predictive capabilities. A critical first step involves a thorough interpretation of regulatory requirements.
Martins-Afolabi stresses, "To ensure the AI models we develop meet regulatory requirements, it is important to ensure that the regulatory text has been accurately interpreted and any grey areas have been discussed and clarified." This careful groundwork prevents misalignments and ensures the model is built on a compliant foundation from the start, adhering to frameworks such as the Internal Ratings-Based (IRB) approach.
The challenge then becomes maintaining the model's predictive strength, often derived from complex algorithms, without falling foul of transparency requirements. Explainability tools become essential in bridging this gap. "To retain predictive power, we use explainability tools to make the inner workings of more complex models easier to understand," Martins-Afolabi explains.
These tools, such as LIME and SHAP, allow teams to demonstrate how specific inputs influence model outputs, providing the necessary justification for model behavior to validators and regulators without unduly simplifying the model to the point where its accuracy is compromised. This balance ensures that the models are not only compliant but also effective in their primary function of assessing risk.
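As a hedged sketch of how SHAP is typically applied in this setting (synthetic data and a generic gradient-boosted classifier, assuming the shap and scikit-learn packages are available; this is not the institutions' actual tooling):

```python
# Minimal SHAP sketch on synthetic obligor data.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "leverage": rng.normal(3.0, 1.0, 1000),
    "interest_cover": rng.normal(4.0, 1.5, 1000),
    "liquidity_ratio": rng.normal(1.2, 0.4, 1000),
})
# Toy default flag loosely driven by leverage and interest coverage.
y = (0.5 * X["leverage"] - 0.4 * X["interest_cover"]
     + rng.normal(0, 1, 1000) > 1.0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer produces additive per-feature contributions (in log-odds space
# for this model) for every obligor, which can be surfaced to validators.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Top drivers for the first obligor, ranked by absolute contribution.
first = pd.Series(shap_values[0], index=X.columns).sort_values(key=abs, ascending=False)
print(first.head(3))
```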
A Case Study: Enhancing Credibility with Explainable AI
Practical applications demonstrate how explainable AI can significantly bolster the credibility of wholesale credit decisions, particularly when dealing with complex or novel modeling approaches. Martins-Afolabi describes a standout project focused on enhancing early warning systems for corporate risk models post-COVID.
The core issue was a disconnect between a traditional model that was robust from a regulatory standpoint and the need for greater sensitivity to emerging risks. She elaborates, "The challenge was twofold: our traditional risk model was robust from a regulatory standpoint, but lacked sensitivity to emerging risks and didn't always reflect real-time deterioration." Simultaneously, credit officers harbored skepticism towards "black box" solutions, especially concerning high-value clients.
The solution involved a hybrid model incorporating a machine learning layer trained on diverse data, including macroeconomic indicators and specific triggers like covenant breaches. Crucially, explainability was central to its success.
"Where the project succeeded was in how it handled explainability: The business analysis stream worked closely with Quant and Data Science to define clear business rules around how the AI signal would be used—ensuring it could not override core regulatory PDs, but could flag early warning signals for a proactive review," Martins-Afolabi details.
Running "what-if" scenarios with end-users transformed skepticism into advocacy. The resulting solution demonstrably improved the timeliness of credit decisions, enhanced pricing sensitivity, and was ultimately adopted more broadly, showcasing how embedding explainability can enhance the practical value and trustworthiness of advanced AI models in critical banking functions.
Agile Methodologies for Accountability and Adaptability
In the dynamic environment of AI-driven risk management, characterized by shifting regulations and evolving data landscapes, traditional development approaches often fall short. Agile methodologies, with their emphasis on iterative development and continuous feedback, offer a framework well-suited to maintaining accountability and adaptability.
Martins-Afolabi underscores their importance: "Iterative feedback loops and Agile techniques are critical in AI-driven risk environments because they enable us to quickly and efficiently respond to change." This responsiveness is essential when dealing with complex models operating in a constantly changing context, offering benefits like reduced risks and increased flexibility.
The core strength of Agile lies in its incremental nature and stakeholder involvement. "The Agile approach allows us to build incrementally, test early, and involve stakeholders—like credit officers, validation teams, and IT—from the outset," Martins-Afolabi explains.
This early and frequent engagement ensures that potential issues, such as poorly defined model drivers or misinterpretations of requirements, are identified and addressed rapidly during sprint reviews. This continuous feedback loop fosters accountability by making progress transparent and allows teams to adapt the model quickly to new data, changing market conditions, or updated regulatory expectations, ensuring the final solution remains relevant and effective, aligning with Agile risk management principles.
The Future Landscape: Explainable AI in Global Financial Risk Management
Looking ahead, explainable AI is poised to become an increasingly integral component of global financial risk management. As sophisticated machine learning techniques permeate credit decision-making, capital modeling, early warning systems, and stress testing, the demand for transparency and accountability will only intensify.
Martins-Afolabi foresees this trend clearly: "As machine learning becomes more embedded in credit decision-making, capital models, early warning systems, and stress testing, regulators and stakeholders will continue to demand not just performance, but accountability, transparency, and fairness." This demand stems from the inherent risks associated with opaque models in high-stakes financial applications, as highlighted by the EU AI Act's focus on high-risk systems like creditworthiness assessment.
Regulatory bodies like the EBA, PRA, and APRA are unlikely to approve 'black-box' models for critical functions unless their outputs can be thoroughly justified. Explainability provides the necessary bridge. "That's where explainable AI bridges the gap—it allows us to retain the predictive strength of machine learning while translating model decisions into insights that credit officers, auditors, and regulators can understand and trust," Martins-Afolabi states.
Preparing for this future requires a proactive stance. Martins-Afolabi indicates she has focused on two key areas: embedding explainability into the model lifecycle from initiation through deployment and building cross-functional literacy by helping teams across risk, credit, and IT understand AI concepts and explainability tools. This approach fosters a culture ready to embrace transparent and trustworthy AI.
The integration of AI into wholesale credit risk management presents a paradigm shift, offering unprecedented opportunities for enhanced accuracy and efficiency. Yet, this progress is tempered by the significant challenge of AI opacity—the "black box" problem that obscures decision-making processes. Explainable AI has emerged as an indispensable field, providing the tools and methodologies needed to foster transparency and understanding in these complex systems.
The journey towards truly explainable AI in banking is multifaceted. It involves sophisticated techniques, driven by benefits ranging from improved risk management to enhanced trust, but faces hurdles like technical complexity, data governance, and regulatory pressures.
Successfully navigating this environment demands a holistic approach. It requires combining technical expertise with a deep understanding of credit risk, governance, ethics, and the evolving legal landscape, as exemplified by professionals like Martins-Afolabi. Ultimately, XAI, embedded within strong governance frameworks, is a cornerstone of responsible AI innovation in finance. It enables banks to harness AI's power while maintaining accountability, fairness, and stakeholder trust, paving the way for a financial system where AI is deployed transparently and ethically.