
AI Model Assurance – objective validation, independent risk assessment, and compliance testing

Lorykeet provides the technology and tools to enable efficient, reliable, and independent AI assurance assessments and audits. AI assurance involves technical processes for testing the behavior of algorithms; the goal is to help model assurance practitioners conduct their assessments more reliably, efficiently, and effectively.

AI assurance services provide the testing, checking, and verification needed to evaluate the trustworthiness and performance of an AI system. An assessment or audit can be conducted against laws, rules, regulations, policies, public commitments, or other principles, thus enabling trust in the development and use of an AI system. The scope of an AI assurance project or program varies with its context and objectives. Lorykeet worked with a Big Four firm on machine learning model assurance. Lorykeet's platform was used to determine whether model results could be confidently relied upon, whether the outcomes and predictions were reasonable in the context of outside parameters, and whether any unwanted biases had been introduced through data drift or model training.
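To make the kind of checks described above concrete, here is a minimal sketch of two tests an assurance review might run: a population stability index (PSI) measure for data drift, and a demographic parity gap for unwanted bias. Lorykeet's actual platform internals are not public; the function names, thresholds, and synthetic data here are illustrative assumptions only.

import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a training-time (expected) and production (actual) sample.
    Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 severe."""
    # Bin edges are fixed from the training sample, then reused for production.
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid division by zero and log(0) in empty bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

def demographic_parity_gap(preds, group):
    """Absolute gap in positive-prediction rates between two groups (0 and 1)."""
    preds, group = np.asarray(preds), np.asarray(group)
    return abs(preds[group == 0].mean() - preds[group == 1].mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train_scores = rng.normal(0.0, 1.0, 5000)   # model scores at training time
    prod_scores = rng.normal(0.3, 1.1, 5000)    # model scores in production
    print("PSI:", population_stability_index(train_scores, prod_scores))

    preds = rng.integers(0, 2, 1000)            # binary model decisions
    group = rng.integers(0, 2, 1000)            # protected-attribute flag
    print("Parity gap:", demographic_parity_gap(preds, group))

In practice an auditor would run tests like these against real scoring logs rather than synthetic data, and compare the results to thresholds agreed with the model owner.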

Lorykeet believes that AI assurance services will grow in importance as AI expands into more aspects of business, government, and individual lives. Many countries are developing and promulgating new compliance requirements to help ensure fairness, protect privacy, and guard society against unethical behavior or misuse of AI. It goes without saying that new regulations on how AI systems should or must be used will be followed by some form of enforcement. It will therefore be essential to have capabilities to determine whether rules and regulations are being followed. Enterprises using AI systems, and the stakeholders affected by their use, will want to know what the algorithms are really doing.

Assurance will be increasingly important, not only for addressing compliance, but also for assessing other risks where compliance alone does not provide sufficient information to know that a system is trustworthy.

Lorykeet has been designed and developed to enable collaboration and to provide a common platform that meets the needs of data scientists, MLOps engineers, business users, risk managers, and auditors. We believe that AI assurance will bring these different stakeholders together in a way that improves the governance and value of AI deployments. You might think of Lorykeet as AI for governing AI, and machine learning for managing machine learning models. With Lorykeet, organizations can:

  • Avoid “black box” hard-to-explain outcomes and predictions
  • Overcome passive and active resistance from business users
  • Provide validation, assurance, and auditability
  • Enable “what-if” analysis and counterfactual capabilities (see the sketch after this list)
  • Improve the human-AI interface across all phases of the AI lifecycle
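As an illustration of the “what-if” and counterfactual capabilities listed above, the sketch below holds an applicant's inputs fixed, sweeps a single feature, and records how the model's prediction responds. The toy model, feature names, and data are assumptions made for the example, not Lorykeet's actual interface.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Train a toy credit-style model on synthetic data.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))                  # columns: [income, debt_ratio, age]
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)
model = LogisticRegression().fit(X, y)

def what_if(model, instance, feature_idx, values):
    """Predicted approval probability as one feature is varied, all else fixed."""
    probes = np.tile(instance, (len(values), 1))
    probes[:, feature_idx] = values
    return model.predict_proba(probes)[:, 1]

applicant = np.array([-0.5, 1.2, 0.0])         # a borderline case
sweep = np.linspace(-2, 2, 9)                  # counterfactual income values
for v, p in zip(sweep, what_if(model, applicant, 0, sweep)):
    print(f"income={v:+.2f} -> P(approve)={p:.2f}")

A reviewer can use the resulting curve to judge whether the model's sensitivity to a single input is reasonable, and to find the smallest change that would flip a decision, which is the essence of a counterfactual explanation.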
