Karen Borg, Consulting Director, Reinsurance Contracts at CNA; Teresa Snider, a partner at Porter Wright Morris & Arthur LLP; and John Standish, Co-Founder and Chief Innovation and Compliance Officer at Charlee.ai, provided a broad overview of the potential uses of AI in the insurance industry and the pitfalls to avoid when integrating AI solutions into workstreams.
Mr. Standish kicked off the discussion by identifying the key AI technologies, highlighting Natural Language Processing, Computer Vision, and the Internet of Things as technologies with potential application in the insurance arena. He discussed the mechanisms underlying Generative AI, pointing out that insurance regulators require supervised learning for AI systems, meaning that a human must be involved in training the AI.
Ms. Snider addressed the primary legal and ethical concerns relating to the use of Generative AI, including the prevalence of hallucinations and the risk that the AI may reach incorrect conclusions when the volume of outdated or incorrect information in its training data exceeds the volume of correct information. She reviewed the Model Rules of Professional Conduct and highlighted the potential ethical issues that could arise from the use of AI, such as cross-matter knowledge pooling, the inability of GenAI to “forget” information, and the risks of consumer-based AI tools. Finally, Ms. Snider discussed the ethical obligations of lawyers to use AI competently, to obtain informed consent from clients both to use AI and to feed client data into the model, to protect client data, and to exercise independent judgment by ensuring human supervision of such models.
Ms. Borg reviewed the current case law, ethical rules, and ABA opinions governing the use of AI. She noted that lawyers who acknowledged and promptly corrected errors arising from their use of AI received lesser sanctions than those who failed to take such remedial actions.
Mr. Standish tallied the number of jurisdictions globally that had implemented, or were in the process of implementing, regulations and frameworks relating to the use of AI, including those of the NAIC. He also identified six pillars of ethical AI: (1) that the work product be factual; (2) that it be accurate; (3) that it be explainable, meaning that the AI can point to documents in the file supporting its output; (4) that it be transparent; (5) that it be articulate; and (6) that it be capable of being tested for accuracy.
Finally, Mr. Standish discussed potential applications of AI in legacy books of business. These included analysis of historical data to identify underpriced or underperforming segments, catastrophe modeling, stress-testing of portfolios, optimizing pricing models, identifying leakage, claims analytics, and fraud detection.
A video replay of this presentation can be accessed in the AIRROC On-Demand Library at https://airroc.memberclicks.net/airroc-on-demand.
