Introduction to Deep Learning
AI adoption is changing the way every industry works, and Deep Learning is one of the most powerful approaches for implementing it. Deep Learning models can produce highly accurate outputs, often more precise than other ML models, yet their adoption in the finance industry remains limited. The primary reason is not the algorithms' performance but their black-box nature: the system cannot explain its own decisions, and this opacity makes users hesitant to adopt it.
What are the Challenges of Deep Learning?
This is especially true in credit risk assessment, where the Equal Credit Opportunity Act and the Fair Credit Reporting Act require a financial institution to state the reason when it declines a credit or loan application. If a deep learning model is used here, its opaque nature means the institution cannot give the reason behind the decision, because the model does not explain itself. It also becomes difficult to detect bias or verify fairness.
For neural networks, several techniques can explain the model's decisions through feature importance, such as LIME (Local Interpretable Model-Agnostic Explanations), DeepLIFT, and Integrated Gradients.
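As an illustration, here is a minimal sketch of computing local feature attributions for a credit-risk classifier with Captum's Integrated Gradients and DeepLift and with the lime package. The model architecture, the ten applicant features, and the training data are hypothetical placeholders, not the setup from the original experiment.

```python
import numpy as np
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients, DeepLift
from lime.lime_tabular import LimeTabularExplainer

# Hypothetical credit-risk classifier: 10 applicant features -> default probability.
model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
model.eval()

X_train = np.random.rand(500, 10).astype(np.float32)   # placeholder training data
x = torch.tensor(X_train[:1])                           # one applicant to explain

# Gradient-based attributions (Captum), both using an all-zeros baseline.
ig_attr = IntegratedGradients(model).attribute(x, baselines=torch.zeros_like(x))
dl_attr = DeepLift(model).attribute(x, baselines=torch.zeros_like(x))

# Perturbation-based attribution (LIME) on the same observation.
predict_fn = lambda a: model(torch.tensor(a, dtype=torch.float32)).detach().numpy().reshape(-1)
explainer = LimeTabularExplainer(X_train, mode="regression")
lime_exp = explainer.explain_instance(X_train[0], predict_fn, num_features=5)

print("Integrated Gradients:", ig_attr.detach().numpy().round(3))
print("DeepLift:            ", dl_attr.detach().numpy().round(3))
print("LIME top features:   ", lime_exp.as_list())
```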
Though these techniques provide explanations, the finance industry still hesitates to use them in credit assessment because two questions remain difficult to answer:
- Trust: do these methods provide an accurate and interpretable explanation?
- Reliability: how consistently does a method produce trustworthy explanations?
What are the Solutions for Deep Learning?
To answer these questions and increase users' trust and satisfaction, the following sections discuss approaches that address them, checking their trustworthiness and reliability so that lenders can adopt Deep Learning with more confidence. Users can apply the same approaches to assess the trust and reliability of their own neural networks.
Trustworthiness of Deep Learning
To check the trustworthiness of the interpretability approaches, a neural network is trained to predict credit risk. First, the global feature importance is calculated from the model's weights. Then the local feature importance for an individual observation is computed using LIME, Integrated Gradients, and DeepLift, and the two are compared.
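A minimal sketch of this comparison, reusing the hypothetical model and the attributions `ig_attr` and `dl_attr` from the earlier snippet. The weight-based global importance and the top-k overlap criterion are illustrative assumptions, not the original experiment's exact method.

```python
import torch

# Global importance: aggregate absolute first-layer weights per input feature
# (a simple weight-based proxy; other global measures are possible).
def global_importance(model):
    first_linear = next(m for m in model.modules() if isinstance(m, torch.nn.Linear))
    return first_linear.weight.detach().abs().sum(dim=0)   # shape: (num_features,)

def top_k_overlap(global_scores, local_scores, k=4):
    """Count how many of the top-k features agree between the two rankings."""
    g = set(torch.topk(global_scores.abs(), k).indices.tolist())
    l = set(torch.topk(local_scores.abs(), k).indices.tolist())
    return len(g & l)

g_imp = global_importance(model)
print("Overlap with Integrated Gradients:", top_k_overlap(g_imp, ig_attr[0]))
print("Overlap with DeepLift:            ", top_k_overlap(g_imp, dl_attr[0]))
```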
In this case, DeepLift and Integrated Gradients give feature importances similar to the global explanation: each shares four of its top features with it, while LIME shares only one. This result is for a single local observation, but it suggests that Integrated Gradients and DeepLift are more likely to be accepted than LIME.
Reliability in Deep Learning
Reliability measures how consistently a method produces trustworthy results. To check it, we examine the baseline. In our use case there is no natural baseline, so we take a reference point for justification: the approach is evaluated against randomly selected baselines. The more consistently an approach responds to these random observations, the more reliable it is.
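One way to sketch such a check, again reusing the hypothetical model and input `x` from the first snippet: re-run Integrated Gradients with several randomly drawn baselines and look at how much the attributions vary. The number of baselines and the use of attribution standard deviation as the stability measure are assumptions for illustration.

```python
import torch
from captum.attr import IntegratedGradients

# Re-run Integrated Gradients with random baselines and measure attribution spread;
# lower spread across baselines suggests a more reliable explanation.
ig = IntegratedGradients(model)
attributions = []
for _ in range(20):
    random_baseline = torch.rand_like(x)            # random observation as baseline
    attributions.append(ig.attribute(x, baselines=random_baseline))

attr_stack = torch.cat(attributions, dim=0)          # shape: (20, num_features)
per_feature_std = attr_stack.std(dim=0)
print("Attribution std across random baselines:", per_feature_std.detach().numpy().round(3))
```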
Conclusion
Interpretability allows us to build a fair and trustworthy system. LIME, DeepLift, and Integrated Gradients are the three approaches we discussed for explaining a neural network's decisions, along with methods to compare their trustworthiness and reliability. The best way to select an approach is to first decide which properties the system must always have. SHAP (SHapley Additive exPlanations), one of the Explainable AI libraries, provides multi-framework implementations built on DeepLIFT and Integrated Gradients.
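For reference, a brief sketch of SHAP with the hypothetical PyTorch model and data from the earlier snippets: DeepExplainer builds on DeepLIFT, and GradientExplainer on expected gradients, an extension of Integrated Gradients. The background sample size is an arbitrary choice.

```python
import numpy as np
import shap
import torch

# Background sample used as the reference distribution for the explainers.
background = torch.tensor(X_train[:100])

# DeepLIFT-based SHAP values.
deep_explainer = shap.DeepExplainer(model, background)
deep_values = deep_explainer.shap_values(torch.tensor(X_train[:5]))

# Expected-gradients (Integrated Gradients style) SHAP values.
grad_explainer = shap.GradientExplainer(model, background)
grad_values = grad_explainer.shap_values(torch.tensor(X_train[:5]))

print("Deep SHAP attributions for the first applicant:", deep_values[0])
```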
- Read more about the Ethics of Artificial Intelligence
- Explore the Challenges and Solutions in AI Adoption