Advancing the state of the art

“Research: the quest for in-depth knowledge and wisdom.
Even the most advanced technologies, like AI, can still be improved through research.”

We research multi-modal learning and eXplainable AI (XAI) to develop responsible,
high-precision fraud detection software that can redefine AI in cybersecurity.

At Diddle AI, we conduct research on XAI that advances the state of the art in the field,
applying it in the design phase of our fraud detection software after extensive research into
multi-modal learning and eXplainable AI.

Multi-modal Learning AI

Multimodal learning is an AI paradigm in which multiple intelligence-processing algorithms are combined to generate more intelligent insights and achieve higher performance.

Multimodal learning consolidates disconnected, heterogeneous data from various sources and inputs into a single resultant model.

Unlike traditional unimodal systems, multimodal systems can exploit complementary information across modalities, which becomes evident only when the modalities are included together in the learning process. Learning-based methods that combine signals from different modalities can therefore produce more robust inferences, or even new insights, that would be impossible in a unimodal system.

Multimodal learning presents two primary benefits:
  • Multiple modalities observing the same phenomenon can make more robust predictions, because detecting certain changes may only be possible when both modalities are present.
  • Fusing multiple modalities can capture complementary information or trends that no individual modality's predictions capture on their own.
Other Benefits of Multi-modal Learning

Modalities are, essentially, channels of information. Data from multiple sources are semantically correlated, and sometimes provide complementary information to each other, thus reflecting patterns that aren’t visible when working with individual modalities on their own. Such systems consolidate heterogeneous, disconnected data from various sensors, helping produce more robust predictions.
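The fusion idea above can be sketched in a few lines. The example below is a minimal, illustrative late-fusion scheme (not our production system): each modality is scored by its own simple model, and the per-modality scores are combined into one fused prediction. All feature names, weights, and values are hypothetical.

```python
import numpy as np

def unimodal_score(features, weights):
    """Score from a single modality: a linear model squashed to [0, 1]."""
    return 1.0 / (1.0 + np.exp(-(features @ weights)))

def late_fusion(scores, modality_weights):
    """Late fusion: weighted average of per-modality scores."""
    return float(np.average(scores, weights=modality_weights))

# Hypothetical transaction observed through two modalities.
text_features = np.array([0.9, 0.1])      # e.g. message-content signals
behavior_features = np.array([0.2, 0.8])  # e.g. transaction-pattern signals

text_score = unimodal_score(text_features, np.array([2.0, -1.0]))
behavior_score = unimodal_score(behavior_features, np.array([-0.5, 3.0]))

# The fused score weighs both channels; either alone could miss the signal.
fused = late_fusion([text_score, behavior_score], modality_weights=[0.4, 0.6])
```

Late fusion is only one design choice; early or intermediate fusion would instead combine raw features or learned representations before scoring.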

eXplainable AI

Our fraud detection software will be explainable by nature: after extensive research and testing, every fraud decision it makes comes with an explanation.

Our team strives to produce "white box" models that provide high accuracy along with explainability of the decisions taken, offering customers the following advantages:

  • Trust: Our product helps users calibrate their trust through the level of feature explanations provided for a given data source, so users have an appropriate level of trust in what the system can and cannot do.
  • Understandability: Our product provides a simple, aesthetic interface that explains the decision result, without needing to explain the model's complex internal structure.
  • Comprehensibility: We represent features in a human-understandable fashion, using symbolic descriptions and directly interpretable quantitative and qualitative concepts.
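A white-box model makes the above properties concrete because its prediction decomposes exactly into per-feature contributions. The sketch below is a hypothetical linear fraud scorer (feature names and weights are invented for illustration, not Diddle AI's actual model): the explanation it returns is faithful by construction, since the contributions sum to the model's logit.

```python
import numpy as np

def explain_prediction(features, weights, feature_names):
    """White-box explanation: each feature's signed contribution to the score.

    For a linear model the logit is exactly the sum of per-feature terms,
    so ranking those terms gives a faithful explanation of the decision.
    """
    contributions = features * weights
    logit = contributions.sum()
    probability = 1.0 / (1.0 + np.exp(-logit))
    explanation = sorted(
        zip(feature_names, contributions), key=lambda kv: -abs(kv[1])
    )
    return probability, explanation

# Hypothetical transaction: all names and values are illustrative.
names = ["amount_zscore", "new_device", "night_time"]
weights = np.array([1.2, 2.0, 0.5])
features = np.array([2.5, 1.0, 0.0])

prob, expl = explain_prediction(features, weights, names)
# `expl` ranks the features by how strongly each pushed the decision,
# which is the kind of human-readable output a simple interface can show.
```

Deep models need post-hoc techniques (e.g. surrogate models or attribution methods) to approximate this property; a genuine white-box model gets it for free.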