
Explainable AI Using Expressive Boolean Formulas


The explosion of artificial intelligence (AI) and machine learning applications is permeating nearly every industry and slice of life.

But its growth doesn’t come without irony. While AI exists to simplify and/or speed up decision-making or workflows, the methodology for doing so is often extremely complex. Indeed, some “black box” machine learning algorithms are so intricate and multifaceted that they defy easy explanation, even by the computer scientists who created them.

That can be quite problematic when certain use cases – such as in the fields of finance and medicine – are governed by industry best practices or government regulations that require transparent explanations of the inner workings of AI solutions. And if these applications are not expressive enough to satisfy explainability requirements, they can be rendered useless regardless of their overall efficacy.

To address this conundrum, our team at the Fidelity Center for Applied Technology (FCAT) — in collaboration with the Amazon Quantum Solutions Lab — has proposed and implemented an interpretable machine learning model for Explainable AI (XAI) based on expressive Boolean formulas. Such an approach can include any operator that can be applied to one or more Boolean variables, thus providing higher expressivity compared to more rigid rule-based and tree-based approaches.

You can read the full paper here for comprehensive details on this project.

Our hypothesis was that since models — such as decision trees — can get deep and difficult to interpret, finding an expressive rule with low complexity but high accuracy was an intractable optimization problem that needed to be solved. Further, by simplifying the model through this advanced XAI approach, we could achieve additional advantages, such as exposing biases that are vital in the context of ethical and responsible usage of ML, while also making the model easier to maintain and improve.

We proposed an approach based on expressive Boolean formulas because they define rules with tunable complexity (or interpretability) according to which input data are classified. Such a formula can include any operator that can be applied to one or more Boolean variables (such as And or AtLeast), thus providing higher expressivity compared to more rigid rule-based and tree-based methodologies.
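To make the idea concrete, here is a minimal illustrative sketch (not the paper’s implementation) of how a rule built from operators such as And, Or, and AtLeast might be represented and evaluated; the operator names come from the description above, while the symptom variables are hypothetical:

```python
# Illustrative sketch only: composing and evaluating an expressive
# Boolean formula from operators like And, Or, and AtLeast.

def And(*clauses):
    # True when every sub-clause holds.
    return lambda x: all(c(x) for c in clauses)

def Or(*clauses):
    # True when any sub-clause holds.
    return lambda x: any(c(x) for c in clauses)

def AtLeast(k, *clauses):
    # True when at least k of the sub-clauses hold.
    return lambda x: sum(c(x) for c in clauses) >= k

def Var(name):
    # Look up a Boolean feature by name.
    return lambda x: bool(x[name])

# Hypothetical rule: (fever AND cough) OR at least 2 of {fatigue, headache, nausea}
rule = Or(
    And(Var("fever"), Var("cough")),
    AtLeast(2, Var("fatigue"), Var("headache"), Var("nausea")),
)

sample = {"fever": 1, "cough": 0, "fatigue": 1, "headache": 1, "nausea": 0}
print(rule(sample))  # -> True (two of the three checklist symptoms hold)
```

A formula like this stays readable even as it grows, which is the interpretability advantage described above.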

In this problem we have two competing objectives: maximizing the performance of the algorithm while minimizing its complexity. Thus, rather than taking the standard approach of applying one of two optimization methods – combining multiple objectives into one or constraining one of the objectives – we chose to include both in our formulation. In doing so, and without loss of generality, we mainly use balanced accuracy as our overarching performance metric.
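For reference, balanced accuracy — the performance metric mentioned above — averages the recall of each class, so a model cannot score well on imbalanced data by favoring the majority class. A minimal sketch with toy labels:

```python
# Balanced accuracy: the mean of sensitivity (recall on the positive
# class) and specificity (recall on the negative class).

def balanced_accuracy(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    pos = sum(y_true)
    neg = len(y_true) - pos
    sensitivity = tp / pos if pos else 0.0
    specificity = tn / neg if neg else 0.0
    return 0.5 * (sensitivity + specificity)

y_true = [1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 0, 0, 1]
print(balanced_accuracy(y_true, y_pred))  # 0.5 * (3/4 + 1/2) = 0.625
```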

Also, in including operators like AtLeast, we were motivated by the idea of addressing the need for highly interpretable checklists, such as a list of medical symptoms that signify a particular condition. It is conceivable that a decision could be made using such a checklist of symptoms, in which a minimum number would need to be present for a positive diagnosis. Similarly, in finance, a bank may decide whether or not to offer credit to a customer based on the presence of a certain number of factors from a larger list.

We successfully implemented our XAI model and benchmarked it on some public datasets for credit, customer behavior, and medical conditions. We found that our model is generally competitive with other well-known alternatives. We also found that our XAI model can potentially be powered by special-purpose hardware or quantum devices for solving fast Integer Linear Programming (ILP) or Quadratic Unconstrained Binary Optimization (QUBO) problems. The addition of QUBO solvers reduces the number of iterations, resulting in a speedup through the fast proposal of non-local moves.
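As a rough illustration of what a QUBO solver optimizes — not the paper’s actual formulation — the objective is the quadratic energy x^T Q x over binary variables; a toy matrix and a brute-force minimizer:

```python
# Toy QUBO example: minimize x^T Q x over binary vectors x.
# Q is a hypothetical 3x3 upper-triangular matrix, not from the paper.
from itertools import product

def qubo_energy(Q, x):
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def brute_force_min(Q):
    # Exhaustively check all 2^n assignments (fine for tiny n);
    # special-purpose hardware targets the same objective at scale.
    n = len(Q)
    return min(product([0, 1], repeat=n), key=lambda x: qubo_energy(Q, x))

Q = [[-1, 2, 0],
     [0, -1, 2],
     [0, 0, -1]]
best = brute_force_min(Q)
print(best, qubo_energy(Q, best))  # (1, 0, 1) -2
```

The diagonal rewards setting a variable to 1, while the positive off-diagonal terms penalize turning on adjacent pairs, so the minimum picks the non-adjacent variables.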

As noted, explainable AI models using Boolean formulas can have many applications in healthcare and in Fidelity’s field of finance (such as credit scoring, or evaluating why some customers may have chosen a product while others didn’t). By creating these interpretable rules, we can attain deeper insights that can lead to future improvements in product development or refinement, as well as to optimized marketing campaigns.

Based on our findings, we have determined that Explainable AI using expressive Boolean formulas is both appropriate and desirable for use cases that mandate further explainability. Plus, as quantum computing continues to develop, we foresee the opportunity to attain potential speedups by using it and other special-purpose hardware accelerators.

Future work may center on applying these classifiers to other datasets, introducing new operators, or applying these concepts to other use cases.
