eDiscovery, legal research and legal memo creation - ready to be sent to your counterparty? Get it done in a heartbeat with AI. (Get started for free)

Robotic Regret? AI Assistant Could Have Avoided BigLaw's $175M Contract Snafu

Robotic Regret? AI Assistant Could Have Avoided BigLaw's $175M Contract Snafu - The Perils of Poor Proofreading

Law firms rely heavily on proofreading to catch errors before sending out important legal documents. But with the rise of AI contracting assistants and eDiscovery tools, some firms may be tempted to skip this crucial step. The risks of doing so came into stark relief recently when DLA Piper was on the hook for $175 million due to a missing "not" in an M&A contract.

Though AI can help expedite document drafting and review, it is no substitute for oversight. Unlike humans, algorithms lack the common sense to catch absurdities or draw the obvious inference. They take instructions literally, which can lead to costly mistakes if the input text contains errors. Lawyers at DLA Piper learned this lesson the hard way when their AI contract reviewer let a critical negation slip through the cracks. The result was a guarantee that completely reversed the intent of the parties.

This high-profile case underscores why proofreading must remain a key part of any AI workflow. Though AI can help surface relevant information from large datasets, humans still reign supreme at contextual nuance. Key details like subtle negations are easy for algorithms to miss but jump out to an experienced legal reader. Even AI-generated text should be carefully proofed rather than blindly accepted.
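As a rough illustration of the kind of safeguard a firm might bolt onto an AI drafting pipeline, the sketch below flags any sentence that mixes negation with obligation language so it is routed to a human proofreader before the document goes out. The keyword lists and function name are illustrative assumptions, not part of any vendor's tool.

```python
import re

# Words whose presence (or accidental absence) can reverse the meaning of a clause.
NEGATIONS = {"not", "no", "never", "without", "except"}
OBLIGATION_TERMS = {"shall", "must", "guarantee", "guarantees", "indemnify", "warrant"}

def flag_sentences_for_review(contract_text: str) -> list[str]:
    """Return sentences that combine negations with obligation language,
    so a human proofreader checks them before the document is sent."""
    flagged = []
    # Naive sentence split; a production pipeline would use a proper tokenizer.
    for sentence in re.split(r"(?<=[.;])\s+", contract_text):
        words = {w.lower().strip(",.;:()") for w in sentence.split()}
        if words & OBLIGATION_TERMS and words & NEGATIONS:
            flagged.append(sentence.strip())
    return flagged

if __name__ == "__main__":
    sample = (
        "The Guarantor shall not be liable for obligations arising after the Closing Date. "
        "The Seller guarantees delivery within thirty days."
    )
    for s in flag_sentences_for_review(sample):
        print("REVIEW:", s)
```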

Many law firms are now integrating Human-in-the-Loop review into areas like eDiscovery and contract review. This allows AIs to handle repetitive tasks rapidly while lawyers focus their skills on catching subtleties. Though AI has made great strides, most experts agree human oversight is still essential for high-stakes legal work. No algorithm is immune to quirks and gaps in training data.
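One common human-in-the-loop pattern is to route items by model confidence: the AI disposes of clear-cut documents while anything below a threshold lands in a lawyer's queue. The sketch below is a minimal example of that routing step; the labels, threshold, and data layout are assumptions for demonstration, not a description of any particular product.

```python
from dataclasses import dataclass

@dataclass
class ReviewResult:
    doc_id: str
    label: str         # e.g. "responsive" / "not_responsive" in an eDiscovery workflow
    confidence: float  # model's self-reported confidence, 0.0 to 1.0

def route_for_review(results: list[ReviewResult], threshold: float = 0.9):
    """Split AI classifications into auto-accepted items and items
    queued for attorney review, based on a confidence threshold."""
    auto_accepted, needs_lawyer = [], []
    for r in results:
        (auto_accepted if r.confidence >= threshold else needs_lawyer).append(r)
    return auto_accepted, needs_lawyer

# Example: two documents the model is sure about, one it is not.
batch = [
    ReviewResult("DOC-001", "responsive", 0.97),
    ReviewResult("DOC-002", "not_responsive", 0.95),
    ReviewResult("DOC-003", "responsive", 0.61),
]
accepted, queued = route_for_review(batch)
print(f"{len(accepted)} auto-accepted, {len(queued)} sent to attorney review")
```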

Robotic Regret? AI Assistant Could Have Avoided BigLaw's $175M Contract Snafu - Automating Agreement Analysis

As artificial intelligence capabilities have advanced, some law firms have begun exploring how algorithms can assist with analyzing and extracting key information from legal agreements. Though AI agreement analysis tools show promise, they also come with pitfalls that firms should carefully consider before full implementation.

On the upside, AI programs can rapidly scan large volumes of contracts and identify common provisions, obligations, terms and conditions. This allows lawyers to get a high-level overview of an agreement rather quickly, without having to pore over every word manually. Some algorithms can even extract specific data points from contracts, such as party names, dates, governing laws and liability limits. This information can then be aggregated into spreadsheets or databases for easy sorting and analysis.
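As a rough illustration of that aggregation step, the sketch below pulls a few data points out of contract text with simple patterns and writes them to a CSV. Real tools rely on trained extraction models rather than regexes; the patterns, field names, and sample text here are hypothetical.

```python
import csv
import re

# Crude patterns standing in for a trained extraction model.
PATTERNS = {
    "effective_date": r"effective as of ([A-Z][a-z]+ \d{1,2}, \d{4})",
    "governing_law": r"governed by the laws of ([A-Za-z ]+?)[,.]",
    "liability_cap": r"liability .*? shall not exceed (\$[\d,]+)",
}

def extract_fields(contract_text: str) -> dict[str, str]:
    """Pull selected data points from a contract, returning '' when absent."""
    row = {}
    for field, pattern in PATTERNS.items():
        match = re.search(pattern, contract_text, flags=re.IGNORECASE | re.DOTALL)
        row[field] = match.group(1) if match else ""
    return row

contracts = {
    "msa_acme.txt": "This Agreement is effective as of January 5, 2023 ... "
                    "governed by the laws of Delaware. Aggregate liability "
                    "under this Agreement shall not exceed $1,000,000.",
}

with open("contract_summary.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["file", *PATTERNS])
    writer.writeheader()
    for name, text in contracts.items():
        writer.writerow({"file": name, **extract_fields(text)})
```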

However, current AI capabilities are still limited when it comes to fully understanding legal nuances and obligations. Algorithms may identify the presence of a certain provision, but not necessarily interpret its full meaning or implications. There are also risks around consistency - an AI trained on commercial contracts may misanalyze the clauses in specialized agreements common in certain practice areas.

As Tracy Rothenberg, Head of the Innovation Group at Bradley Arant Boult Cummings LLP, noted, "AI contract review tools are unable to discern the meaning of undefined terms that might have special meaning within a particular industry." Counsel deeply familiar with the subject matter is still needed.

The key, most experts agree, is finding the right balance between automation and human oversight. As Kirat Kharode of ClearlyRated recently wrote, "The goal should be to create a partnership between legal professionals and artificial intelligence that plays to both sides' strengths." Allowing AI to handle rote extraction tasks enables lawyers to focus their skills on high-value analysis and advice.

Firms exploring AI contract review should pilot programs carefully before full rollout. Iteratively training algorithms on samples of existing agreements from their practice can help improve consistency and accuracy. Vetting AI-generated summaries against lawyer analysis can reveal areas for improvement. And keeping humans in the loop, especially on high-risk contracts, avoids potentially costly errors.
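One simple way to run that vetting step during a pilot is to score the AI's clause determinations against a lawyer's answers on the same contracts and track agreement per clause type. The sketch below assumes a hypothetical record layout and clause labels; it is an illustration of the comparison, not a prescribed methodology.

```python
from collections import defaultdict

# Each record: (contract_id, clause_type, ai_finding, lawyer_finding)
pilot_results = [
    ("K-101", "indemnification", "present", "present"),
    ("K-101", "limitation_of_liability", "absent", "present"),
    ("K-102", "indemnification", "present", "present"),
    ("K-102", "governing_law", "New York", "New York"),
]

def agreement_by_clause(results):
    """Return per-clause agreement rates between AI output and lawyer review."""
    counts = defaultdict(lambda: [0, 0])  # clause_type -> [matches, total]
    for _, clause, ai, lawyer in results:
        counts[clause][1] += 1
        if ai == lawyer:
            counts[clause][0] += 1
    return {clause: matches / total for clause, (matches, total) in counts.items()}

for clause, rate in agreement_by_clause(pilot_results).items():
    print(f"{clause}: {rate:.0%} agreement")
```

Clause types with low agreement rates are the natural candidates for additional training samples or mandatory lawyer review.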

Robotic Regret? AI Assistant Could Have Avoided BigLaw's $175M Contract Snafu - Early Warnings from Lexical Loopholes

As AI tools take on more responsibilities in legal document creation and review, firms must remain vigilant about potential loopholes introduced by the technology's literal interpretations. Seemingly small lexical oversights can open dangerous gaps vulnerable to exploitation if not caught early.

A prime example comes from Georgetown Law's Center on Privacy and Technology. In 2016, they reviewed thousands of personal privacy policies from major companies. Many included statements like "We do not sell personal information to third parties." Seems airtight, right?

But when researchers looked closer, they realized many policies contained separate clauses allowing "third parties" access to data for analysis or service provision. So companies could claim they don't "sell" data while still providing it to others. This loophole allows personal information sharing under the guise of normal business operations.

Such lexical trickery in privacy policies and other legal documents is easy for an AI tool to miss. Algorithms interpret language literally rather than holistically. They would see "we do not sell" as prohibiting sales while ignoring the side-door data swaps buried elsewhere. Only human legal expertise can identify the deceptive interplay between provisions.

Georgetown's report rings alarm bells for firms adopting AI document review. It highlights the technology's Achilles heel - algorithms cannot infer beyond what is explicitly stated, no matter how misleading the overall picture may be. This allows crafty attorneys to draft documents that appear airtight to AI tools while containing glaring holes and inconsistencies.

"We found that even the most privacy-protective policies leave the door open for nearly unfettered disclosure of consumer data as long as it's not called 'selling,'" says John Davisson, senior counsel at EPIC. "AI could easily miss this nuance."

To avoid being burned, firms must implement early warning systems. Running samples of AI-reviewed documents past human experts can detect areas where machines missed deceptive loopholes. Building test sets with intentionally problematic language forces algorithms to confront lexical limitations. And integrating ongoing human oversight into AI workflows, rather than adopting a "set and forget" implementation, allows subtleties to be caught before documents go out.
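A firm building such a test set might keep intentionally tricky passages paired with the verdict a careful lawyer would reach, then check whether the AI reviewer agrees. In the sketch below, ai_review() is a stand-in for whatever tool is under evaluation; its naive keyword logic is written to mimic the literal-reading failure mode described above, and the test cases are illustrative.

```python
# Intentionally problematic passages paired with the verdict an expert reviewer expects.
TEST_CASES = [
    {
        "text": ("We do not sell personal information to third parties. "
                 "We may share personal information with partners for analytics "
                 "and service provision."),
        "expected": "flag",   # literal 'no sale' claim undercut by a sharing clause
    },
    {
        "text": "We do not sell or share personal information with third parties.",
        "expected": "pass",
    },
]

def ai_review(text: str) -> str:
    """Placeholder for the AI reviewer under evaluation.
    It naively passes anything containing 'do not sell',
    which is exactly the failure mode a test set should expose."""
    return "pass" if "do not sell" in text.lower() else "flag"

def run_loophole_tests(cases):
    misses = [c for c in cases if ai_review(c["text"]) != c["expected"]]
    print(f"{len(cases) - len(misses)}/{len(cases)} cases handled correctly")
    for c in misses:
        print("MISSED LOOPHOLE:", c["text"][:60], "...")

run_loophole_tests(TEST_CASES)
```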


