The imperative of expanding the traditional MRM function

Learn how implementing an MLOps solution can help manage the risk and complexity associated with the proliferation of ML models.


Financial institutions and non-bank financial technology companies (FinTechs) alike make extensive use of machine learning (ML) models.

Banks, for example, rely on such models for a range of risk assessments, including predictive underwriting, credit risk management, suspicious and/or fraudulent activity management, fair lending compliance, derivative and financial instrument pricing and valuation, securitization, and risks associated with trading and financial reporting. Developed in Python, R, MATLAB, or Excel, these powerful models are broadly leveraged by business users to support complex business needs.

Managing an increasingly complex model environment creates challenges for modelling, risk, and compliance teams, for senior management, and for auditors.

Regulatory considerations

Banking institutions operate within a regulatory framework that provides supervision and guidance. A model risk management (MRM) governance function must be implemented to help prevent model-based decisions from damaging the business should models prove inaccurate, flawed, or misused. If not properly managed in production, these models could trigger adverse commercial, operational, reputational, or regulatory outcomes.

Frameworks such as SR 11-7 and SS3/18 are in place to ensure the transparency and auditability of such models. The FDIC’s requirements for institutions with over $1bn in assets are the following:

  • Implement a disciplined and knowledgeable model development process that is well documented and conceptually sound
  • Set up controls to ensure proper implementation
  • Implement processes to ensure correct and appropriate use
  • Implement effective validation processes
  • Ensure strong governance, policies, and controls

In SR 11-7, the Federal Reserve and the Office of the Comptroller of the Currency (OCC) broadly define a “model” as a quantitative method, system, or approach that applies statistical, economic, financial, or mathematical theories, techniques, and assumptions to process input data into quantitative estimates, and “model risk” as the potential for adverse consequences from decisions based on incorrect or misused model outputs.

These regulatory requirements generate costs for institutions on top of those associated with developing and implementing the models. Indeed, one way to avoid the “uncontrolled” proliferation of “pet spreadsheets and models” throughout the organization is to monitor these applications in a centralized fashion.

The objective is not to impose specific models, but rather to independently review and assess the robustness of their design, the value they provide, and their evolution over time.

Focusing on applicable and upcoming regulations and monitoring model applications centrally can help financial institutions and FinTechs manage an increasingly complex MRM landscape.

The changing shape of model risk management (MRM)

Ensuring visibility and relevance is especially critical in an environment where hundreds – if not thousands – of models must coexist and be used across the organization. Traditionally, model risk management has been deployed to help validate the reliability, consistency, and robustness of models used by financial institutions.

The shift to external data

Today, the nature of the data consumed by these models is changing: the mix of internal and external data is shifting rapidly towards more external data than ever before. Reliance on these models, and consumption of their outputs and insights, increasingly sits exclusively within business units, which places the models outside the scope of the traditional corporate IT function. In other words, the lack of joint oversight from corporate IT over BU-specific models constitutes a risk exposure that must be addressed.

As model portfolios proliferate, so does the complexity of the models within them. Modelling teams have begun to incorporate machine learning (ML) tools and algorithms, for instance, to add predictive capabilities. This shift from models that are easily replicable for validation purposes to more opaque, less explainable ML models creates new challenges for financial institutions and regulatory authorities. In a resource- and cost-constrained environment, this contributes to a worsening model backlog and pipeline processing problem.

The need for adequate governance and oversight

As the share of ML models grows relative to traditional models, the need for adequate governance and oversight becomes more pressing. Input data needs to be checked for quality and relevance, model workflows need to be managed, and users need to set the level of automation required to feed data into the models, run and train them, and expose the outputs.
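
To make this concrete, the short Python sketch below shows one way such an input-data quality check might look before a batch of data is fed to a model. It is an illustration only, not part of the Mazars blueprint; the feature names, thresholds, and use of pandas are assumptions.

    import pandas as pd

    # Hypothetical policy values; a real MRM framework would define these per model.
    REQUIRED_COLUMNS = ["loan_amount", "income", "credit_score"]  # illustrative features
    MAX_MISSING_RATIO = 0.05

    def check_input_quality(df: pd.DataFrame) -> list:
        """Return a list of data-quality issues found in one batch of model inputs."""
        issues = []
        # Schema check: every feature the model expects must be present.
        missing_cols = [c for c in REQUIRED_COLUMNS if c not in df.columns]
        if missing_cols:
            issues.append("missing columns: %s" % missing_cols)
        # Completeness check: flag features with too many missing values.
        for col in df.columns:
            ratio = df[col].isna().mean()
            if ratio > MAX_MISSING_RATIO:
                issues.append("%s: %.0f%% missing values" % (col, 100 * ratio))
        # Basic sanity check on one illustrative numeric field.
        if "credit_score" in df.columns and not df["credit_score"].between(300, 850).all():
            issues.append("credit_score outside the expected 300-850 range")
        return issues

    batch = pd.DataFrame({
        "loan_amount": [25000, 40000, None],
        "income": [55000, None, 72000],
        "credit_score": [710, 640, 900],  # 900 is deliberately out of range
    })
    for issue in check_input_quality(batch):
        print("DATA QUALITY ISSUE:", issue)

In practice, checks of this kind would run automatically before every scoring or retraining job, with any failures routed to the designated model owner.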

To keep risk under control, model owners must be clearly designated to guarantee the organization’s compliance with an end-to-end governance process.

Benefits of an expanded and improved MRM function

Benefits of implementing an expanded and improved MRM function include providing a central repository for all models, tools, and other engines, and the ability to validate and calibrate each step of the model lifecycle, from input data and model development through robustness assessment, evolution, and changes, to the reliability of outputs.
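
As an illustration of what a central repository entry might capture, the Python sketch below defines a minimal model inventory record together with a simple check for overdue validations. The fields, tiers, and revalidation window are hypothetical rather than prescriptions from the report.

    from dataclasses import dataclass, field
    from datetime import date
    from typing import List, Optional

    @dataclass
    class ModelRecord:
        """One entry in a central model inventory (all fields are illustrative)."""
        model_id: str
        owner: str              # designated model owner under the governance process
        business_use: str       # e.g. credit risk, fraud detection
        tier: int               # risk tier driving validation depth and frequency
        implementation: str     # e.g. Python, R, Excel
        last_validated: Optional[date] = None
        open_findings: List[str] = field(default_factory=list)

        def is_due_for_validation(self, today: date, max_age_days: int = 365) -> bool:
            """Flag models never validated or whose last validation is stale."""
            if self.last_validated is None:
                return True
            return (today - self.last_validated).days > max_age_days

    # A tiny inventory and a periodic governance check.
    inventory = [
        ModelRecord("CR-001", "credit-risk-team", "PD model", tier=1,
                    implementation="Python", last_validated=date(2021, 3, 1)),
        ModelRecord("FR-014", "fraud-team", "transaction scoring", tier=2,
                    implementation="R"),
    ]
    overdue = [m.model_id for m in inventory if m.is_due_for_validation(date.today())]
    print("Models due for (re)validation:", overdue)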

Further benefits of an expanded and improved MRM function are defining consistent model documentation standards and flexibly controlling access and roles. In addition, successfully leveraging technology such as automation, advanced analytics, and machine learning can improve performance and cost-effectiveness and reduce complexity.

Finally, at a minimum, an effective MRM function improves the assessment of key ML model risk factors such as data relevance and reliability, and model explainability and transparency. Significantly, it also strengthens data privacy and security.

Compared to “traditional” models, such as those built in Excel, ML models introduce new risks that need to be addressed in specific ways. Managing machine learning life cycles at scale is, therefore, more challenging.

Firstly, an ML model’s performance depends largely on the data it is fed and trained with. Data drift can occur when the data collection process changes or the model’s context evolves, feeding the model previously unseen data and eventually degrading its performance.
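
One common way to watch for such drift is to compare the distribution of a feature in production against its distribution in the training data. The Python sketch below does this with a two-sample Kolmogorov-Smirnov test; the synthetic data and the significance threshold are illustrative assumptions, not a recommended standard.

    import numpy as np
    from scipy.stats import ks_2samp

    def drift_detected(train_values, live_values, p_threshold=0.01):
        """Compare one numeric feature's live distribution against its training
        distribution using a two-sample Kolmogorov-Smirnov test."""
        result = ks_2samp(train_values, live_values)
        # A small p-value suggests the two samples come from different
        # distributions, i.e. the feature has drifted since training.
        return result.pvalue < p_threshold

    rng = np.random.default_rng(seed=0)
    train = rng.normal(loc=0.0, scale=1.0, size=5000)   # data the model was trained on
    live = rng.normal(loc=0.4, scale=1.0, size=1000)    # shifted "production" data

    print("Drift detected:", drift_detected(train, live))

Similar monitoring can be applied feature by feature and to model outputs, with alerts feeding back into the MRM workflow.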

Even though the ML life cycle involves people from the business, data science, and IT teams, these groups do not use the same tools and, in many cases, do not even share the same fundamental skills to serve as a baseline for communication.

In addition, while data scientists specialize in data analysis and model building, different skills and tools are needed to deploy and maintain models in production. The complexity quickly becomes overwhelming when staff turnover on data teams is factored in, because data scientists end up managing models they did not create.

Faced with the challenge of managing increasingly complex model environments, financial institutions need to implement MLOps to limit their risk exposure vis-à-vis their business and regulators alike. The MLOps solution blueprint recommended by Mazars provides a path towards achieving these objectives.

Read our full solution report.

Download now

The information provided here is for general guidance only, and does not constitute the provision of tax advice, accounting services, investment advice, legal advice, or professional consulting of any kind. The information provided herein should not be used as a substitute for consultation with professional tax, accounting, legal or other competent advisers.
