What is XAI - Explainable AI
Explainable AI (XAI) refers to artificial intelligence systems that can provide understandable, transparent explanations for their decisions or predictions.
Explainable AI & Machine Learning & Graph Analytics
With the growing adoption of machine learning, explainable models are becoming a hot industry topic. Predictive models based on traditional neural networks and deep learning are notorious for giving no insight into how a particular outcome came to be, whereas an explainable model can show which key variables drove its output.
Decision Tree - A Classic Example
A classic example of an explainable model is a decision tree. A decision tree is itself a special kind of graph: every prediction follows a readable path of branching rules from the root down to a leaf.
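To make this concrete, here is a minimal hand-written decision tree for a hypothetical loan-approval scenario. The features, thresholds, and outcomes are illustrative assumptions, not taken from any real model; the point is that each prediction returns its decision together with the path of rules that produced it, which is exactly what makes the model explainable.

```python
# A tiny hand-coded decision tree. Each branch appends the rule it applied,
# so the caller gets both the decision and the reasoning path.

def predict(applicant):
    path = []
    if applicant["income"] >= 50_000:
        path.append("income >= 50000")
        if applicant["debt_ratio"] < 0.4:
            path.append("debt_ratio < 0.4")
            return "approve", path
        path.append("debt_ratio >= 0.4")
        return "review", path
    path.append("income < 50000")
    if applicant["years_employed"] >= 5:
        path.append("years_employed >= 5")
        return "review", path
    path.append("years_employed < 5")
    return "decline", path

decision, why = predict({"income": 62_000, "debt_ratio": 0.25, "years_employed": 2})
print(decision, "because", " and ".join(why))
# approve because income >= 50000 and debt_ratio < 0.4
```

A black-box model would output only "approve"; the rule path is the explanation a user or auditor can actually check.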
Graph Interpretation
Similarly, graph algorithms and graph features used as part of an AI model lend themselves to natural interpretation, because graph relationships read like plain statements, such as “customer–(purchases)–>product”.
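As a minimal illustration (with made-up names), a typed graph edge like the one above can be stored as a triple and read back as a plain-English statement, which is what gives graph features their natural meaning:

```python
# Each edge is a (source, relation, target) triple; the relation name
# doubles as the human-readable interpretation of the edge.

edges = [
    ("alice", "purchases", "laptop"),
    ("bob", "reviews", "laptop"),
]

def describe(edge):
    src, rel, dst = edge
    return f"{src} -({rel})-> {dst}"

for edge in edges:
    print(describe(edge))
# alice -(purchases)-> laptop
# bob -(reviews)-> laptop
```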
Advantages of XAI
Explainable AI models have many advantages. For example, if a personalized recommendation model provides an explanation or evidence for its outcome, users are more likely to trust it and prefer it over opaque alternatives. Graph analytics is well suited to calculating and displaying that evidence, using graph visualization where helpful. On a network of purchased items, it can show products bought by other users with similar interests, or products similar to the ones the user has previously purchased.
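The "users with similar interests" idea can be sketched in a few lines. This is an illustrative toy, not a production recommender: it recommends products bought by customers who share purchases with the target user, and returns the overlapping purchases as the evidence behind each suggestion.

```python
from collections import Counter

# Toy purchase graph: customer -> set of purchased products.
purchases = {
    "alice": {"laptop", "mouse", "desk"},
    "bob":   {"laptop", "mouse", "monitor"},
    "carol": {"desk", "lamp"},
}

def recommend(user, purchases):
    mine = purchases[user]
    scores = Counter()
    evidence = {}
    for other, theirs in purchases.items():
        if other == user:
            continue
        shared = mine & theirs          # purchases in common with this user
        if not shared:
            continue
        for product in theirs - mine:   # products the target user lacks
            scores[product] += len(shared)
            evidence.setdefault(product, set()).update(shared)
    # Each candidate comes back with the shared purchases that justify it.
    return [(p, s, sorted(evidence[p])) for p, s in scores.most_common()]

for product, score, why in recommend("alice", purchases):
    print(f"{product} (score {score}) because you both bought {why}")
```

The evidence list is the explanation: "monitor is suggested because bob, who also bought your laptop and mouse, bought it" is something a user can verify, unlike a bare relevance score.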
Use of Graph in traditional ML
Graph-based analytics and machine learning are useful not only for customers but also for corporate users. Many companies employ large teams of trained analysts to determine whether a transaction is potentially fraudulent. With the help of a fraud detection system (FDS) powered by graph algorithms, analysts can connect multiple data sources through nodes and edges to visually surface potentially fraudulent transactions. A graph-based FDS can be more effective than machine learning models that judge fraud on a score alone, because it exposes the connections behind each alert.
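A common graph technique behind such systems is link analysis: accounts and shared attributes (devices, phone numbers, addresses) become nodes, and a traversal from a known-fraudulent account reveals which other accounts are connected to it and through how many hops. The sketch below uses breadth-first search over an illustrative account/device graph; all identifiers are made up.

```python
from collections import deque, defaultdict

# Toy bipartite graph: accounts connected to the devices they use.
edges = [
    ("acct_1", "device_A"),
    ("acct_2", "device_A"),   # shares a device with acct_1
    ("acct_2", "device_B"),
    ("acct_3", "device_B"),   # reachable from acct_1 in two shared devices
    ("acct_4", "device_C"),   # unconnected to acct_1
]

graph = defaultdict(set)
for a, b in edges:
    graph[a].add(b)
    graph[b].add(a)

def linked_accounts(start, graph, max_hops=4):
    """BFS from a known-bad account; return connected accounts with hop distance."""
    seen = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if seen[node] >= max_hops:
            continue
        for nbr in graph[node]:
            if nbr not in seen:
                seen[nbr] = seen[node] + 1
                queue.append(nbr)
    # Keep only account nodes, excluding the starting account itself.
    return {n: d for n, d in seen.items() if n.startswith("acct") and n != start}

print(linked_accounts("acct_1", graph))
# {'acct_2': 2, 'acct_3': 4}
```

The hop distances themselves are the explanation an analyst sees: acct_2 shares a device with the flagged account, while acct_4 has no connection and is left alone.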
Another example is phone scam prevention. Millions of calls occur every day, but only a small percentage of them are malicious scams. When implemented appropriately, graph technology combined with machine learning can quickly explore the relationships between senders, phone numbers, and recipients, and produce a model that detects which calls are fraudulent.
XAI and Graph Analytics
Explainable AI (and ML) combined with graph analytics is also useful for regulators and auditors. Banks require advanced methods to detect possible money laundering. Many banks are leveraging machine learning to increase detection accuracy, but at the same time they need to be able to demonstrate to auditors that their systems work effectively and how those systems reach their conclusions. ML models using graph-based features provide the necessary transparency.
Graphs process large data quickly & efficiently
Machine learning is often computationally demanding, and graph-based machine learning is no exception. As the volume of linked data grows, fast computation is required to traverse it. Even a typical graph database may not be able to handle deep-link analysis over large amounts of graph data. This is why we need a graph database that can process large amounts of data quickly and efficiently.
In order to explain the results of personalized recommendations and fraud detection, powerful queries in the graph database must traverse connections in the graph data, and calculations such as filtering and aggregation must be performed to support complex data structures.
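The "filtering and aggregation over connections" a graph query performs can be sketched in plain Python. The data and threshold below are illustrative; a real graph database would express this as a declarative query rather than explicit loops.

```python
from collections import Counter

# Toy edge list: (source, relation, target, amount).
transactions = [
    ("alice", "paid", "shop_1", 120),
    ("alice", "paid", "shop_2", 30),
    ("bob",   "paid", "shop_1", 500),
    ("bob",   "refunded", "shop_1", 500),
]

# Filter step: keep only "paid" edges at or above a threshold.
paid = [(s, t, amt) for s, rel, t, amt in transactions
        if rel == "paid" and amt >= 100]

# Aggregation step: total amount received per target node.
totals = Counter()
for _, target, amt in paid:
    totals[target] += amt

print(dict(totals))
# {'shop_1': 620}
```

Explaining a flagged result then amounts to replaying this pipeline: the filtered edges are the evidence, and the aggregate is the score.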
Deep-link graph analytics powers the ability to learn graph patterns and supports the next generation of machine learning by providing explainable models. When combined with AI and ML, graph-powered explainable AI will be a force to be reckoned with and benefit businesses for years to come.