Actuarial profession sees potential in large language models (LLMs)
In the rapidly evolving world of Artificial Intelligence (AI), understanding its ethics and responsible use has become a pressing concern for many industries, including insurance. Two key resources here are UNESCO's Recommendation on the Ethics of Artificial Intelligence and the National Association of Insurance Commissioners' (NAIC) Principles on Artificial Intelligence.
Actuaries, with their expertise in risk management and governance, play a pivotal role in ensuring the ethical use of AI, particularly in the context of Large Language Models (LLMs). These models, which come in four basic variants (foundational, instruct, code, and multimodal), are increasingly being considered for deployment in the insurance industry.
The use of LLMs in insurance, however, presents challenges. Data privacy and security, regulatory compliance, and ethical standards are among the hurdles that need to be addressed. To delve deeper into these issues, a panel of experts from the Society of Actuaries (SOA) Research Institute recently discussed the use of generative AI in the insurance industry.
Dale Hall, FSA, MAAA, CERA, the managing director of research at the SOA Research Institute, led the discussion. The SOA's AI Research landing page offers a wealth of resources, including a library of reports and the monthly Actuarial Intelligence Bulletin.
In March 2025, the European Insurance and Occupational Pensions Authority (EIOPA) established a Consultative Expert Group on Data Use in Insurance to address opportunities and risks related to data use in the insurance sector, including generative AI models. This group, supported by EU Commission guidelines on AI, is a key body shaping how generative AI is developed and applied for insurance industry purposes.
Deploying an LLM in-house can offer more control, but using an API from a major developer such as OpenAI (the maker of ChatGPT) is simpler, faster, and more cost-effective, while relying on the provider's security and privacy controls. The SOA Research Institute has published a guide on deploying LLMs for actuarial use titled "Operationalizing LLMs: A Guide for Actuaries."
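To make the API route concrete: calling a hosted model usually amounts to posting a small JSON payload to the provider's endpoint. The sketch below assumes an OpenAI-style chat-completions format; the endpoint, model name, and prompt are illustrative placeholders, not a recommendation.

```python
import json

# Illustrative endpoint; the exact URL and authentication depend on the provider.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "gpt-4o-mini") -> dict:
    """Assemble an OpenAI-style chat-completion payload.

    The model name is a placeholder; substitute whichever model your
    provider offers and your compliance review has approved.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "You are an assistant supporting actuarial work."},
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.2,  # low temperature for more repeatable output
    }

payload = build_chat_request(
    "Summarize the key lapse-risk drivers in this block of policies."
)
print(json.dumps(payload, indent=2))
```

The payload would then be sent with an HTTPS POST carrying the provider's API key; keeping the request-building step separate makes it easy to log and audit exactly what data leaves the organization.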
The panel identified several applications of LLMs in insurance, including coding assistance, digital assistants, data summarization and categorization, testing and model validation assistance, translation, research source attribution, and claims integration.
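For categorization tasks such as claims triage, a common pattern is to constrain the model to a fixed label set and validate its reply before it enters any downstream system. The sketch below is a minimal illustration of that pattern; the claim taxonomy is invented for the example.

```python
# Illustrative claim taxonomy -- a real deployment would use the
# insurer's own coding scheme.
CLAIM_CATEGORIES = ["auto", "property", "liability", "health", "other"]

def categorization_prompt(claim_text: str) -> str:
    """Build a constrained prompt so the model must answer with one known label."""
    labels = ", ".join(CLAIM_CATEGORIES)
    return (
        f"Classify the insurance claim below into exactly one of: {labels}.\n"
        "Reply with the label only.\n\n"
        f"Claim: {claim_text}"
    )

def validate_label(model_reply: str) -> str:
    """Guardrail: reject anything outside the taxonomy instead of trusting free text."""
    label = model_reply.strip().lower()
    if label not in CLAIM_CATEGORIES:
        raise ValueError(f"Unexpected label from model: {model_reply!r}")
    return label

print(categorization_prompt("Hail damage to the insured's roof and gutters."))
print(validate_label(" Property \n"))  # normalizes to "property"
```

The validation step matters because LLM output is free text: without it, a stray explanation or misspelled label from the model could silently corrupt a claims pipeline.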
When choosing an LLM for a specific task, considerations include model size and computational requirements, task-specific performance, context window size, and cost versus performance. To assess the strengths and limitations of an LLM, benchmarks such as Massive Multitask Language Understanding (MMLU), Google-Proof Q&A (GPQA), Mathematics Aptitude Test of Heuristics (MATH), HumanEval, and Discrete Reasoning Over Paragraphs (DROP) can be used.
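One simple way to weigh these cost-versus-performance trade-offs is a weighted scorecard over candidate models. The sketch below is purely illustrative: the model names, benchmark scores, context windows, and prices are made-up placeholders, and the weights would come from the task's own requirements.

```python
# Placeholder candidates -- all figures are invented for illustration,
# not published benchmark results or real prices.
CANDIDATES = {
    "large-model":  {"task_score": 0.90, "context_ktok": 128, "usd_per_mtok": 10.0},
    "medium-model": {"task_score": 0.82, "context_ktok": 32,  "usd_per_mtok": 1.0},
    "small-model":  {"task_score": 0.70, "context_ktok": 8,   "usd_per_mtok": 0.1},
}

def score(profile: dict, weights: dict) -> float:
    """Weighted score: task performance and context window count in favor,
    cost counts against. Each criterion is normalized to roughly [0, 1]."""
    return (
        weights["perf"] * profile["task_score"]
        + weights["context"] * profile["context_ktok"] / 128  # vs. largest window
        - weights["cost"] * profile["usd_per_mtok"] / 10.0    # vs. priciest model
    )

WEIGHTS = {"perf": 0.5, "context": 0.2, "cost": 0.3}
best = max(CANDIDATES, key=lambda name: score(CANDIDATES[name], WEIGHTS))
print(best)
```

With these invented numbers the mid-sized model wins: its small performance gap is outweighed by its much lower cost, which is exactly the kind of trade-off the panel's criteria are meant to surface.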
When choosing an LLM provider, factors to consider include privacy and protection; risk and compliance; technology and reliability; bias, fairness, and discrimination; transparency and explainability; and accountability and responsibility. Deploying LLMs requires assistance from cloud engineers and software developers, because this work falls outside typical actuarial training and expertise.
The panel concluded that current AI tools, such as LLMs, can boost productivity for some tasks, but they haven't evolved enough to replicate actuarial analysis and decision-making. As the use of AI continues to grow in the insurance industry, it is crucial for actuaries to stay informed and engaged in the ethical discussions surrounding its use.