Artificial Intelligence and Machine Learning

AI-based systems replace human decisions with data-driven ones, which can reduce subjectivity and error when processing large volumes of complex information.

Cornerstone Research teams are skilled in applying these techniques across a range of litigation matters and investigations. We use AI and ML to automate increasingly complex tasks and to unlock new analytical approaches, drawing on both supervised and unsupervised learning.

  • Applied machine learning approaches to healthcare risk adjustment, producing models that explained approximately twice as much variation in claims data as the status quo linear regression model.
  • Designed and trained a neural network model to classify 140,000 documents based on their visual appearance where reliable optical character recognition (OCR) was not achievable.
  • Enhanced record linkage algorithms with semi-supervised machine learning techniques to facilitate complex fuzzy matching tasks (see the sketch following this list).
  • Utilized a variety of machine learning methods to organize and prioritize the review of thousands of documents and to perform a range of other text analyses.
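
To make the fuzzy-matching step concrete, here is a minimal, hypothetical sketch using only Python's standard library. The record fields, values, and the 0.85 acceptance threshold are illustrative assumptions, not the matching algorithms used in our casework.

```python
# Hypothetical fuzzy-matching step in record linkage, standard library only.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Return a 0-1 similarity score between two normalized strings."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

left = [{"id": 1, "name": "Acme Holdings LLC"}, {"id": 2, "name": "Bravo Corp."}]
right = [{"id": "A", "name": "ACME Holdings, L.L.C."}, {"id": "B", "name": "Charlie Inc."}]

# Score every candidate pair. In practice, blocking rules would prune the pairs
# and a semi-supervised classifier would refine the raw similarity scores.
for l in left:
    for r in right:
        score = similarity(l["name"], r["name"])
        if score >= 0.85:  # illustrative acceptance threshold
            print(f"Possible match: {l['id']} <-> {r['id']} (score {score:.2f})")
```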


GPU Computing at Cornerstone Research

As an industry leader in machine learning applications in litigation contexts, Cornerstone Research has invested in on-premises graphics processing unit (GPU) infrastructure to support analysis. For the highly parallel computations common in machine learning, these resources deliver speeds that exceed those of even the fastest central processing unit (CPU) clusters, all while keeping sensitive data within the secure confines of Cornerstone Research’s state-of-the-art data centers. As a result, Cornerstone Research’s Data Science Center can support robust and defensible analyses for our clients.

Black Box Reputation

ML models have proven powerful largely on the strength of their ability to make predictions. For this reason, popular applications of ML typically de-emphasize statistical inference and assertions about causality. Much ML research has therefore focused on improving prediction accuracy, increasing algorithmic efficiency, and incorporating new data sources. The pursuit of prediction accuracy tends to produce complex models, such as ensembles that combine many individual learners.
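
To illustrate, the hypothetical scikit-learn sketch below averages a linear model and a random forest with VotingRegressor. The combined predictor is typically more accurate than either component alone, but its output can no longer be summarized by a single set of coefficients.

```python
# Minimal ensembling illustration; assumes scikit-learn is installed, and the
# data and component models are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor, VotingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = X[:, 0] + np.sin(3 * X[:, 1]) + rng.normal(scale=0.1, size=500)  # nonlinear signal

ensemble = VotingRegressor([
    ("linear", LinearRegression()),                     # interpretable, but misses the sine term
    ("forest", RandomForestRegressor(random_state=0)),  # flexible, but opaque
])
ensemble.fit(X, y)
print(ensemble.predict(X[:3]))  # each prediction is an average across the two models
```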

This drive toward complexity has given some ML models a “black box” reputation: it is difficult for a user to know how the model arrived at its output. That opacity can be an issue when such a model is used to support expert testimony.

ML interpretability is an active area of research that is making progress in addressing these black box issues. A model that is both predictive and robust can be leveraged to understand the associations between inputs and responses. Interpretability techniques include model-specific measures, such as variable importance for tree-based methods, and model-agnostic measures, such as SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations).
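
As an illustration of the measures named above, the sketch below fits a tree-based model and reports both its built-in variable importance and SHAP attributions. It assumes the scikit-learn and shap packages are installed; the data and model are purely illustrative.

```python
# Hedged sketch of tree-based variable importance and SHAP attribution.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = 2 * X[:, 0] - X[:, 2] + rng.normal(scale=0.1, size=500)

model = RandomForestRegressor(random_state=0).fit(X, y)

# Model-specific measure: impurity-based variable importance for tree ensembles.
print("variable importance:", model.feature_importances_)

# SHAP decomposes each prediction into per-feature contributions that, together
# with a baseline, sum to the model's output. TreeExplainer is the tree-optimized
# variant; KernelExplainer offers a fully model-agnostic alternative.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])  # one attribution row per observation
print("SHAP values, first observation:", shap_values[0])
```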