Four Principles of Explainable Artificial Intelligence
- Source: nist.gov
Treliant Takeaway:
Treliant knows model governance. If you need assistance developing compliant adverse action notices for credit decisions made by artificial intelligence or machine learning models, Treliant can help.
Article Highlights:
There are several properties critical to building trust in AI systems, including explainability, resiliency, reliability, accountability, and lack of bias. In August 2020, the National Institute of Standards and Technology (NIST) examined the property of explainability by proposing Four Principles of Explainable Artificial Intelligence (Explainability Principles), which together comprise the fundamental properties of explainable AI systems across the multidisciplinary AI landscape. NIST also presented five categories of explanation and noted that different uses and users of AI will require different types of explanation. Explainable AI is critical because of the high-stakes decision processes that rely on AI, as well as laws and regulations requiring that information about the logic underlying those decisions be provided.[1] A lack of explainability may decrease trust in AI systems and slow societal adoption of AI. The NIST Explainability Principles are intended to support policy decisions, guide future research in AI, and increase societal acceptance, rather than define algorithmic methods. The four principles are (a brief illustrative sketch follows the list):
- Explanation – AI systems should deliver accompanying evidence or reasons for all outputs;
- Meaningful – The explanations provided by systems should be understandable by individual users;
- Explanation Accuracy – The explanations provided by a system should accurately reflect the system’s process for generating the output; and
- Knowledge Limits – An AI system should only operate under conditions within its design limits or when the system has sufficient confidence in its output.
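To make the principles concrete, the sketch below shows how they might surface in a credit-decisioning workflow: each decision arrives with the reasons that drove it (Explanation), those reasons are drawn from the model's actual score contributions rather than a separate rationalization (Explanation Accuracy), and the system abstains when its confidence is weak (Knowledge Limits). This is a minimal illustration only; the feature names, coefficients, thresholds, and synthetic data are hypothetical, logistic regression is just one interpretable modeling choice, and nothing here is prescribed by NIST.

```python
# Hypothetical sketch only: feature names, coefficients, thresholds, and data
# are invented for illustration; this is not a NIST-prescribed method or a
# production credit model.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["credit_utilization", "months_since_delinquency", "debt_to_income"]

# Synthetic training data for an inherently interpretable scoring model.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, len(FEATURES)))
y = (X @ np.array([-1.5, 0.8, -1.0]) + rng.normal(size=500) > 0).astype(int)
model = LogisticRegression().fit(X, y)  # class 1 = approve

def decide(applicant: np.ndarray, threshold: float = 0.5, limit: float = 0.6) -> dict:
    """Return a decision with accompanying reasons and a knowledge-limits check."""
    prob = model.predict_proba(applicant.reshape(1, -1))[0, 1]

    # Knowledge Limits: abstain rather than guess when confidence is weak.
    if max(prob, 1.0 - prob) < limit:
        return {"decision": "refer_to_manual_review", "probability": prob}

    decision = "approve" if prob >= threshold else "deny"

    # Explanation / Explanation Accuracy: principal reasons come from the
    # features' actual contributions to this score (coefficient * value,
    # a simplified zero-baseline attribution), not a post-hoc rationalization.
    contributions = model.coef_[0] * applicant
    order = np.argsort(contributions)  # most approval-lowering features first
    reasons = [FEATURES[i] for i in order[:2]] if decision == "deny" else []

    return {"decision": decision, "probability": prob, "reasons": reasons}

print(decide(np.array([1.2, -0.4, 0.9])))
```

Whether the returned reason codes are Meaningful to an applicant would still depend on translating raw feature names into the plain-language statements that Regulation B expects in an adverse action notice.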
If adopted, these principles would have a significant impact on financial services, especially with respect to adverse action notices under the Equal Credit Opportunity Act. Credit decisions made by AI systems would have to be accompanied by explanations that accurately reflect how each stated reason relates to the applicant's creditworthiness (Explanation Accuracy) and that are meaningful to consumers seeking to improve their future likelihood of receiving credit (Meaningful).
[1] For example, the Equal Credit Opportunity Act (ECOA), the Fair Credit Reporting Act (FCRA), and their implementing regulations.