
Assessing the Business Integrity in AI Adoption

Professionals grapple with the dilemma of harnessing resources like ChatGPT for maximum advantage while maintaining their responsibility towards the well-being and protection of their users.

By Dr. Kilian Pfahl (*)


In the tech-driven corporate landscape, Large Language Models (LLMs) such as ChatGPT have emerged as a game-changer, offering extraordinary potential. These models process vast amounts of data swiftly and deliver detailed, precisely targeted answers. Companies can leverage this technology in manifold ways, especially in high-compliance and data-intensive areas such as Mergers and Acquisitions (M&A). However, deploying LLMs responsibly is crucial: recognizing their limits and the risks associated with their use is paramount to mitigating legal and operational hazards, particularly with respect to corporate due diligence. For business decision-makers, the question is how to make the most of LLMs while staying on the safe side.

M&A Transactions: Boosted by AI

LLMs stand to significantly streamline the M&A due diligence process, especially with their exceptional ability to analyze extensive documents and detect critical contract clauses or potential risks at an early stage. This acceleration in the review phase brings about a more informed basis for decision-making.

In contract management, LLMs can be employed to detect ineffective clauses, suggest necessary adjustments, and bolster the legal security of contracts in crucial areas such as non-compete agreements and liability limitations, minimizing the risk of strategic blunders. Moreover, LLMs can support compliance monitoring, allowing companies to keep track of regulatory changes in areas like data protection and finance and to implement the appropriate adjustments in a timely manner.
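A clause-screening workflow of this kind can be sketched in a few lines. This is a hypothetical illustration only: the model call is a stub that flags a few red-flag phrases, standing in for a real LLM API, and all names (`CLAUSE_PROMPT`, `stub_model`, `screen_clauses`) are inventions for the example.

```python
# Hypothetical sketch of LLM-assisted clause screening.
# The "model" is stubbed; in practice this would call an LLM API.

CLAUSE_PROMPT = (
    "Review the following contract clause for enforceability risks "
    "(e.g. overbroad non-compete terms, missing liability caps) and "
    "answer FLAG or OK:\n\n{clause}"
)

def stub_model(prompt: str) -> str:
    """Stand-in for an LLM call; flags clauses containing obvious red-flag terms."""
    red_flags = ("unlimited liability", "worldwide non-compete", "in perpetuity")
    text = prompt.lower()
    return "FLAG" if any(term in text for term in red_flags) else "OK"

def screen_clauses(clauses, model=stub_model):
    """Return the clauses the model flags for human legal review."""
    return [c for c in clauses if model(CLAUSE_PROMPT.format(clause=c)) == "FLAG"]

contracts = [
    "The seller assumes unlimited liability for all claims.",
    "Either party may terminate with 30 days' written notice.",
]
flagged = screen_clauses(contracts)
print(flagged)
```

The key design point, in line with the article's caution, is that the model only *pre-sorts* clauses for human review; it never makes the final legal judgment.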

A Veil of Inaccuracy?

While LLMs can undoubtedly be a boon for businesses, their pitfalls should not be disregarded. Company leaders must be thoroughly aware of the technology's shortcomings to steer clear of errors and the associated liability risks. It is essential to remember that LLMs are advanced text generators that predict probable word sequences based on colossal amounts of training data. They simulate intelligence by determining which word is most likely to follow the preceding words. Though compelling, these answers are not always correct.
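The principle can be made concrete with a toy bigram model: like an LLM (at vastly larger scale), it simply picks the statistically most frequent continuation of the preceding word, with no notion of whether the result is true. The corpus and function names here are illustrative inventions.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny corpus.
corpus = ("the deal was approved . the deal was blocked . "
          "the deal was approved .").split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(word):
    """Return the most frequent follower of `word` in the training text."""
    return bigrams[word].most_common(1)[0][0]

# "approved" follows "was" twice and "blocked" once, so the model always
# answers "approved" -- statistically plausible, but not necessarily true.
print(predict("was"))
```

The point of the sketch: the output reflects frequency in the training data, not verified fact, which is exactly why such answers can be compelling yet wrong.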

LLMs lack human understanding and knowledge, as well as the capacity to validate content or apply substantive reasoning. Their strength lies in recognizing statistical relationships between words and weaving them coherently into a text. The output, however, is merely the outcome of probability calculations and carries no guarantee of accuracy. Consequently, business leaders must never assume that LLMs deliver a complete or foolproof solution, especially in areas with profound legal implications. These AI tools should serve only as auxiliary aids, never as the sole basis for decision-making.

Transparency conundrum

Another risk factor is the opacity of the decision-making process. LLMs often operate as black boxes, making their inner workings and logic difficult for users to comprehend. Business leaders, however, are obliged to base decisions on traceable and transparent evidence. The Business Judgment Rule (Section 93(1) of the German Stock Corporation Act, AktG) demands that entrepreneurial decisions be transparent and well-reasoned. A lack of transparency could amount to a breach of duty, leaving the decision-making process open to liability claims.

Results can also be skewed by biases inherent in the training data. LLMs learn from vast datasets that often contain historical biases or uneven representation, and these biases unwittingly infiltrate the generated answers, leading to flawed or discriminatory decisions. If nine out of ten training examples are factually incorrect or rest on questionable findings, the tenth example may be drowned out, even if it carries the more valuable or reliable information.

Liability and Corporate Responsibility

The liability of business leaders in connection with the deployment of LLMs is a central legal challenge. According to the Business Judgment Rule, liability is excluded if decisions are made on the basis of appropriate and diligently gathered information. However, the illusion of correctness could lead business leaders to base their decisions on seemingly plausible but erroneous information, thereby exposing them to liability risks.

Therefore, business leaders should exercise caution when utilizing LLMs and should not rely solely on AI-generated content as a foundation for their decisions, lest they breach their duty of care. Employing these AI tools judiciously, within a comprehensive decision-making process that includes human judgment, goes a long way toward responsible AI integration. In some cases, AI integration may be not only a viable option but a prerequisite, if it markedly improves the efficiency and quality of decision-making. This, however, requires well-structured governance, defined processes, and staff training to meet legal requirements and minimize the associated risks. Only then can AI be employed responsibly while satisfying the business duty of care.

*) Dr. Kilian Pfahl is a Senior Associate at Hogan Lovells.

