Compared to strictly additive models, machine-learning (ML) models can provide notable predictive lift when the data present complex relationships. However, without an understanding of the relationships captured by an ML model, we risk encoding accidental, unintended, and even undesirable features into its predictions. These surprising relationships may be introduced by unexpected biases in our data-collection methods, or by confounding treatments in our historical practices, which, if undetected, could yield models that are unfit for their intended tasks. On the bright side, revelations about an ML model’s content can inspire greater insights for the model’s creators and foster greater trust among its users. This paper seeks to explore, illustrate, and compare Explainable Artificial Intelligence (xAI) techniques that can help us gain deeper insights from ML models and operationalize them with far greater confidence. Specifically, we outline some of the explainability support for machine learning provided by toolsets available from FICO.