Video description
Financial services companies are increasingly deploying AI models and services across the credit lifecycle, for applications such as credit onboarding and detecting transaction fraud and identity fraud. These models must be interpretable, explainable, and resilient to adversarial attacks. In some situations, regulatory requirements apply that prohibit black-box machine learning models.
Jari Koister (FICO) shares forward-looking tools and infrastructure FICO has developed to support these needs.
Topics include:
- Examples of financial services applications of AI and ML
- Specific needs for explainability and resiliency
- Approaches to achieving explainability and resiliency
- Regulatory requirements and how to meet them
- A platform that provides support for explainable AI (XAI) and mission-critical AI
- Further research and product development directions
This session was recorded at the 2019 O'Reilly Strata Data Conference in San Francisco.
Table of Contents
Interpretable and Resilient AI for Financial Services - Jari Koister (FICO)