A study published in the Annals of Oncology found that a deep learning algorithm achieved a 95% accuracy rate in detecting melanoma from skin lesion images, outperforming a panel of 58 dermatologists whose average accuracy was 86.6%. In another study published in Nature, a deep learning system was able to identify breast cancer from mammograms with greater accuracy than radiologists.
The AI system showed a reduction in false positives of 1.2% (UK set) and 5.7% (US set), and in false negatives of 2.7% (UK set) and 9.4% (US set). Research in JAMA Network Open demonstrated that an AI algorithm diagnosed lymph node metastasis in breast cancer patients with 99.5% accuracy, surpassing the 96.9% accuracy of human pathologists. Research conducted by Siemens in 2019 showed that AI-driven predictive maintenance tools could forecast equipment failures with up to 30% greater accuracy than experienced maintenance personnel. And according to a 2021 study by J.P. Morgan, AI and machine learning models reduced default prediction errors by approximately 25% compared with traditional statistical models.
And yet, every time I or one of my team members run a webinar on using AI for risk management, the only question people ask is “how accurate is AI?”. Every bloody time.
So let me share a story. In my last 5 Head of Risk roles, I had access both to world-class teams of quant risk professionals and to different AI models, including ones built in-house. And you know what? I have spent considerably more time verifying, checking, correcting and validating my human risk team’s deliverables than I now spend verifying RAW@AI deliverables. What used to take my team weeks can now be done by AI + Python in hours.
AI doesn’t always have to be right, it just has to be less wrong than humans
In my mind, for AI to be universally adopted by risk professionals, it doesn’t need to be perfect; it just needs to make fewer mistakes than humans. This is what Douglas Hubbard calls the “beat the bear” fallacy. Imagine two campers confronted by a bear: one doesn’t have to outrun the bear to survive, he just needs to outrun the other camper. Similarly, AI doesn’t have to be flawless; it just needs to beat human error rates and speed of analysis.
Humans are great at many things, but we get tired, we overlook details, we have blind spots to certain risks and we all have our biases. Some risk managers come from an accounting background and have little understanding of risk math. All these limitations make risk managers less effective, especially when dealing with probability theory and with complex, interrelated risks and decisions. AI, on the other hand, can handle huge datasets, large volumes of text and complex calculations without getting weary or overly biased. AI still makes mistakes, but that isn’t the question. The right question is: does it make fewer or more mistakes than the alternative?
My RAW@AI, for example, can consistently outperform most Big 4 risk consultants and RM1 risk managers. Try it.
The more data you have, the more AI outperforms humans
Large volumes of data are what give AI its risk management superpower. Unlike humans, AI can quickly work through huge amounts of both structured data (risk registers, spreadsheets and databases) and unstructured data (risk reports, interview transcriptions, annual reports, and research papers). This ability lets AI build a broad and current view of potential risks and quantify most risks on the planet. Most risk managers can of course do the same, but it will take them 10X the time to achieve a comparable level of quality.
The human brain is incredibly adept at recognizing familiar patterns, but it struggles with the sheer complexity and subtlety of patterns found in today’s probabilistic risk landscape. AI, on the other hand, excels at finding complex, non-linear relationships within massive datasets (distilling large texts into key points, less so, but that’s only a question of time). This can reveal hidden connections between seemingly disparate events or data points, highlighting risks that would otherwise go unnoticed until it’s too late. According to a 2022 report by IBM, AI systems detected and responded to security breaches on average 40% faster than human-led teams.
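To see why such relationships stay hidden from the usual linear screens, consider a toy, purely synthetic example (not taken from any of the cited studies): a U-shaped dependence between a risk driver x and a loss proxy y has near-zero correlation, so a correlation-based review would report “no relationship”, while even a crude binned check exposes the link immediately.

```python
import random

def pearson(xs, ys):
    """Plain Pearson correlation - the classic *linear* dependence screen."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def binned_mean_spread(xs, ys, bins=10):
    """Crude non-linear dependence check: if the mean of y shifts a lot
    across x-bins, y depends on x even when correlation says otherwise."""
    lo, hi = min(xs), max(xs)
    sums, counts = [0.0] * bins, [0] * bins
    for x, y in zip(xs, ys):
        i = min(int((x - lo) / (hi - lo) * bins), bins - 1)
        sums[i] += y
        counts[i] += 1
    means = [s / c for s, c in zip(sums, counts) if c]
    return max(means) - min(means)

rng = random.Random(0)
xs = [rng.uniform(-1, 1) for _ in range(5_000)]
ys = [x * x + rng.gauss(0, 0.05) for x in xs]  # synthetic U-shaped dependence

print(f"Pearson correlation: {pearson(xs, ys):.3f}")        # close to zero
print(f"Spread of binned means: {binned_mean_spread(xs, ys):.3f}")  # clearly positive
```

Real AI tooling applies far more powerful detectors than this ten-line binning trick, but the point stands: the dependence is there, and a linear lens simply cannot see it.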
You no longer need a math PhD to do quant risk analysis
In the past, every time I joined a company I would struggle to find quants who understood risk management and were capable of the abstract thinking needed to integrate it into decision making. If you have ever tried hiring a quant for risk management, you know what I mean.
Well, AI is changing the game. AI models with access to a Python environment are putting powerful quantitative tools into the hands of a much wider range of professionals. The models take care of the complex math, allowing risk managers to focus on empowering risk taking and integrating risk analysis into decision making. Just as calculators made complex computations accessible to everyone, AI and SIPmath are doing the same for risk modelling. You don’t need to understand the inner workings of a calculator to get the answer, and you no longer need to be a mathematics whiz to perform sophisticated risk analysis.
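To make the calculator analogy concrete, here is a minimal sketch of the kind of Monte Carlo risk model an AI with a Python environment can produce in seconds. The event probability and lognormal severity parameters are purely hypothetical placeholders; the resulting list of trial losses is, in spirit, the array-of-trials representation that SIPmath works with.

```python
import random
import statistics

def simulate_annual_loss(n_trials: int = 10_000, seed: int = 42) -> list[float]:
    """Monte Carlo simulation of annual loss from a single risk:
    the event occurs with some probability; if it occurs, the loss
    severity is drawn from a lognormal distribution."""
    rng = random.Random(seed)
    p_event = 0.3          # hypothetical annual probability of the risk event
    mu, sigma = 10.0, 0.8  # hypothetical lognormal severity parameters
    losses = []
    for _ in range(n_trials):
        if rng.random() < p_event:
            losses.append(rng.lognormvariate(mu, sigma))
        else:
            losses.append(0.0)
    return losses

losses = simulate_annual_loss()
expected_loss = statistics.mean(losses)
# 90th percentile of simulated losses as a simple tail-risk metric
var_90 = sorted(losses)[int(0.9 * len(losses))]
print(f"Expected annual loss: {expected_loss:,.0f}")
print(f"90th percentile loss: {var_90:,.0f}")
```

The risk manager’s job is not to write this loop by hand but to sanity-check the inputs (is 0.3 a defensible event probability? is the severity distribution right?) and to translate the output into a decision.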
You still need to be able to double-check the calculations, because calculation errors are frequent. But you know what is even more frequent? Calculation errors by human risk managers. Much more frequent :)) The question isn’t whether AI will transform risk management. It’s whether you will upskill quickly enough to use AI and guide its insights, or whether your team will be replaced by the next version of RAW@AI.
Learn how to start using AI models in your risk department at #RAW2024.
Douglas Hubbard also popularized another term, “algorithm aversion”. It describes the phenomenon where people prefer human judgment over algorithmic or machine-generated solutions, even when the algorithm performs as well or better. This aversion often persists even after the person has experienced the algorithm’s superior performance, typically due to biases or a lack of trust in automated systems. Look at just some of the studies on algorithm aversion; it’s not new:
- Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). “Algorithm Aversion: People Erroneously Avoid Algorithms After Seeing Them Err.” Published in the Journal of Experimental Psychology: General, this study is one of the foundational pieces of research on algorithm aversion. It demonstrated that people are less likely to use an algorithm after seeing it perform imperfectly, despite the fact that the algorithm outperforms humans on average.
- Logg, J. M., Minson, J. A., & Moore, D. A. (2019). “Algorithm Appreciation: People Prefer Algorithmic to Human Judgment.” Published in Organizational Behavior and Human Decision Processes, this study provided a counterpoint to the typical findings of algorithm aversion, suggesting that under certain conditions, people might prefer or appreciate algorithmic advice over human advice.
- Önkal, D., Goodwin, P., Thomson, M., Gönül, S., & Pollock, A. (2009). “The Relative Influence of Advice from Human Experts and Statistical Methods on Forecast Adjustments.” This study in the Journal of Behavioral Decision Making explored how professionals adjust their forecasts based on advice from statistical methods compared to human experts, highlighting a bias towards human advice even when statistical methods are known to be more accurate.
- Prahl, A., & van Swol, L. M. (2017). “Understanding Algorithm Aversion: When Is Advice From Automation Discounted?” This article in the Journal of Forecasting delves into conditions under which individuals may or may not follow automated advice, identifying factors that can influence the acceptance of algorithmic input.
- Burton, J. W., Stein, M-K., & Jensen, T. B. (2020). “A Systematic Review of Algorithm Aversion in Augmented Decision Making.” This review, published in the Journal of Behavioral Decision Making, consolidates various studies on algorithm aversion, providing a comprehensive overview of how and when algorithm aversion occurs in decision-making processes involving automation.
Important limitations:
- Utilizing AI in risk management involves handling sensitive data, which can raise compliance and privacy issues. Some risks are too sensitive to be analysed by AI, unless it is an in-house closed model.
- Using AI for risk management will probably be considered high risk activity under the EU AI Act and will require significant compliance controls.
- In cases where AI-driven decisions lead to financial losses or compliance breaches, establishing accountability can be challenging. Determining whether the fault lies in the data, model, or decision-making process requires clear protocols.
- Effective use of AI in risk management requires specialized skills that may not be readily available within traditional risk teams. That said, hiring or upskilling personnel to work effectively with AI tools is still easier than finding a good risk quant who understands decision science and behavioural economics.