With AI being rapidly adopted in almost every application imaginable, from spam filtering to self-driving cars to medical radiology, the discussion on how to control it is intensifying. Some even worry about AI becoming too powerful, with humans eventually losing control. We don’t believe that’s a realistic fear, at least not in the foreseeable future.
We do believe, however, that results from the use of AI should be explainable. This is called glass-box AI (as opposed to black-box AI), or Explainable AI (XAI).
The reality, though, is that virtually all modern AI is based on machine learning, which by its nature operates as a black box.
In this whitepaper, we explain how AI results can nevertheless be made explainable, in the context of AI being used to enhance legal due diligence during M&A.
We want to avoid overly technical language in this paper. Instead, we will use a real-life analogy to explain how the technology works, how we ensure you stay in control, and why you can trust the results.