eDiscovery, legal research and legal memo creation - ready to be sent to your counterparty? Get it done in a heartbeat with AI. (Get started for free)

Robo-Scribe or Robo-Flop? One Lawyer's Adventure Testing AI-Generated Legal Briefs

Robo-Scribe or Robo-Flop? One Lawyer's Adventure Testing AI-Generated Legal Briefs - The Promise and Peril of Automated Legal Writing

In recent years, artificial intelligence (AI) has made rapid advances into the legal profession. From e-discovery to predictive analytics, algorithmic systems now assist with many routine legal tasks. One of the holy grails for legal tech developers is automated document drafting—the idea that AI could generate first drafts of things like contracts, briefs, and memos.

Proponents argue automated drafting could expand access to legal services and increase productivity for lawyers. Rather than billing hours manually reviewing documents, attorneys could delegate rote writing tasks to algorithms while focusing their expertise on strategy and analysis. Some also believe AI writers may even surpass humans at tasks like identifying relevant precedents from vast databases of case law.

However, critics have raised concerns about quality, ethics and the implications for the legal profession. Unlike medicine or engineering, law is an interpretive and persuasive practice relying on subjective human judgment. While AI can mimic the surface form of legal arguments, it lacks a deeper understanding of justice, fairness and professional ethics.

Early experiments reveal the limitations. When fed legal briefs, AI systems can generate superficially coherent arguments—but they lack an overarching thesis and fail to construct meaning from facts. The logical connections feel forced, and the writing lacks the fluidity and rhetorical impact of an experienced attorney. While AI programs can summarize large texts, their own original writing remains stilted.

Some argue automated drafting poses risks if used irresponsibly. AI has no inherent sense of ethics; it dutifully produces whatever text it is trained to generate. Without oversight, AI writers could potentially plagiarize sources, fabricate facts or make frivolous arguments violating legal duties. Critics warn delegating too much to algorithms could dehumanize law practice in ways that undercut the social purposes of the profession.

Robo-Scribe or Robo-Flop? One Lawyer's Adventure Testing AI-Generated Legal Briefs - First Impressions: The Good, the Bad and the Wonky

My first experiments with AI legal writing tools yielded mixed results. On the positive side, I was impressed by the speed. Within minutes, I had multiple pages of written content loosely tailored to my prompts. The AI managed to incorporate basic facts and legal concepts with semi-coherent structure, analysis and citations. This is no small feat—as any attorney knows, legal writing is complex and time-consuming. At its best, the AI generated passable prose with the ingredients expected of a legal brief: statements of law, application of rules to facts, and connections to cited authorities. With some editing, the raw output could likely serve as a starting point or draft for human refinement.

However, the text lacked authentic legal reasoning and advocacy. While superficially on topic, arguments lacked nuance, failed to anticipate counterarguments, and did not build persuasively towards a conclusion. The AI mimicked the formal structure of analysis without thoughtful weighing of principles, policies or interpretations. Its citations, while formatted properly, made only tenuous connections to the claims. The writing felt artificial and hollow rather than genuinely persuasive.

Most problematically, the AI sometimes incorporated fabricated facts or drew unwarranted inferences from the source materials I provided. For instance, when given a hypothetical case summary, it wildly extrapolated claims I had never asserted. Other times, it referenced real legal precedents while distorting their implications for the prompt at hand. These instances revealed the technology’s limitations in comprehending nuanced legal situations. Without human oversight, such errors could clearly violate ethics rules on truthfulness and candor towards tribunals.

Robo-Scribe or Robo-Flop? One Lawyer's Adventure Testing AI-Generated Legal Briefs - When AI Gets the Law Right and Wrong

While AI legal writing tools frequently miss the mark, they also demonstrate glimmers of potential. In reviewing numerous algorithmically-generated briefs, I noticed the quality varied tremendously case-by-case. For straightforward legal disputes with clean facts and precedents, the AI proved surprisingly adept at identifying applicable rules and paralleling how an attorney might argue the issues.

For example, in a hypothetical tenant dispute, the AI reliably cited landlord-tenant statutes, quoted their language, and made basic applications to the facts at hand. It crafted a passable argument that the landlord had violated notice requirements prior to entry, supported by factual details from the prompt. While lacking stylistic flair, the broad contours of analysis and citation were present. With light editing, it could serve as a solid first draft of key sections.

In contrast, for complex cases involving murky precedents or competing policy concerns, the AI struggled to construct an orderly line of reasoning. Its arguments came across as disjointed patchworks of factual assertions and legal quotations without connective tissue. The AI could mechanically identify some relevant authorities but failed to persuasively relate them or argue their implications.

These experiences align with broader findings on AI’s progress in law. In tightly constrained legal tasks with clear right or wrong answers, algorithms can match or even exceed humans. This is true in domains like discovering relevant precedents in large databases or predicting case outcomes. However, for open-ended writing tasks demanding judgment and persuasion, AI continues to fall short. Creative synthesis of authorities and facts into novel yet convincing legal theories remains a human domain.

Robo-Scribe or Robo-Flop? One Lawyer's Adventure Testing AI-Generated Legal Briefs - Debugging the Machine: Teaching Robo-Scribe

As AI legal writing tools spread, attorneys face the challenge of debugging inevitable errors and improving quality. While automated drafting shows promise, lawyers cannot deploy these technologies responsibly without close supervision. Like training a new paralegal, attorneys must teach the AI through hands-on editing, feedback and quality control.

Many experts recommend a methodology called "machine teaching" to get the most out of robo-scribes. This involves providing examples of both good writing for the algorithm to mimic and bad writing to correct. Lawyers might upload samples of well-crafted legal briefs from past cases for the AI to study stylistic techniques. But they also must submit instances of the AI's own drafting failures to diagnose where inferencing goes awry.

Providing this corpus of positive and negative examples allows algorithms to continually refine rules on what makes arguments logically coherent, factually supported and persuasive. It is not enough to simply input facts and precedents; lawyers must actively curate the training data to impart legal reasoning skills. The AI will only improve if lawyers treat it like a law clerk, providing individualized critique rather than passively accepting its output.
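The curation workflow described above can be sketched in code. This is a hypothetical illustration, not any vendor's actual API: it shows one plausible way a lawyer-maintained corpus of good and bad drafts, each paired with a critique, could be organized into the preferred/rejected pairs that many preference-based fine-tuning pipelines consume. All names here (`TeachingExample`, `TeachingCorpus`, the sample statute) are invented for the example.

```python
# Hypothetical sketch of a "machine teaching" corpus: paired good and bad
# brief excerpts with lawyer critiques, yielding preference-style records.
from dataclasses import dataclass, field

@dataclass
class TeachingExample:
    prompt: str       # the fact pattern or legal issue given to the AI
    good_draft: str   # a well-crafted sample for the model to mimic
    bad_draft: str    # a flawed AI draft kept for corrective feedback
    critique: str     # the lawyer's note on why the bad draft fails

@dataclass
class TeachingCorpus:
    examples: list = field(default_factory=list)

    def add(self, example: TeachingExample) -> None:
        self.examples.append(example)

    def pairs(self):
        """Yield (chosen, rejected) records in the shape many
        preference-tuning pipelines expect."""
        for ex in self.examples:
            yield {
                "prompt": ex.prompt,
                "chosen": ex.good_draft,
                "rejected": ex.bad_draft,
                "feedback": ex.critique,
            }

corpus = TeachingCorpus()
corpus.add(TeachingExample(
    prompt="Did the landlord violate the 24-hour notice requirement?",
    good_draft="Under Section 12(b), entry without 24 hours' written notice...",
    bad_draft="The landlord is obviously liable for everything.",
    critique="Conclusory; cites no statute and ignores the emergency exception.",
))

for pair in corpus.pairs():
    print(pair["prompt"], "->", pair["feedback"])
```

The key design point mirrors the text: it is not enough to store good samples alone; each record couples a failure with an explanation, so the feedback the lawyer would give a law clerk is preserved alongside the example itself.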

Hands-on collaboration is key. Techniques like "writing by annotation" let lawyers suggest revisions directly in the AI's draft, allowing the system to infer desired changes. Stanford researchers are exploring how lawyers can actively converse with the AI in natural language to guide improvements in real-time. Through such human-machine teamwork, robo-scribes could someday exhibit refined judgment - but only if lawyers invest time teaching through their expertise.

Robo-Scribe or Robo-Flop? One Lawyer's Adventure Testing AI-Generated Legal Briefs - The Verdict: Solid First Draft, but Still Needs a Human Touch

After extensively testing AI legal writing tools, I believe they currently fall into the "solid first draft" category with significant need for human refinement. While showing potential as aids to lawyer productivity, the output reveals glaring limits in legal reasoning and persuasive writing skills. My verdict echoes that of other attorneys who have explored these technologies—enthusiasm tempered by wariness.

Across multiple experiments drafting briefs on hypothetical cases, the AI consistently demonstrated a rudimentary ability to identify basic legal issues, recite general rules, and respond to factual prompts in writing. However, its arguments lacked coherent structure and narrative flow. The AI failed to build analysis logically across paragraphs or anticipate counterarguments. While it cited valid authorities, it made only superficial connections between their principles and the prompt facts. Persuasive rhetoric, policy analysis, and ethical appeals were wholly absent.

In essence, the AI produced disjointed legal analysis “mad-libs” style, plugging facts and citations into formulaic templates. This could provide a lawyer starting material to work from, deleting and reorganizing sections. But it could not independently draft court-ready documents or provide unique legal insights. As one in-house counsel testing a contract-reviewing AI concluded, its suggestions were “kind of helpful, but need significant vetting and editing.” The key benefits were saving lawyers time by outsourcing clerical work, not replacing core skills.

Currently, AI legal writers are most adept at tasks like summarizing documents, predicting outcomes, and preliminarily flagging issues. But they cannot synthesize authorities into novel, compelling legal theories or advocate effectively within professional bounds. Subtlety, discretion and moral imagination remain human skills developed through years of training. AI lacks the socialization within legal culture to write like an attorney accountable to duties of competence, truthfulness, and ethical persuasion.

Robo-Scribe or Robo-Flop? One Lawyer's Adventure Testing AI-Generated Legal Briefs - What I Learned: The Limitations and Potential of AI in Law

My experiments taught me first-hand about the limitations and potential of AI in legal writing. While certain benefits became apparent, the technology remains years away from replicating core attorney skills. Understanding these realities is crucial as AI proliferates in the legal field.

On the limitations side, my key takeaway was that AI currently lacks judgment, discretion and persuasive ability. The algorithmically-generated briefs, while superficially on topic, failed to construct compelling legal narratives or advance insightful theories. They simply regurgitated facts, recited boilerplate rules of law, and made formulaic connections. The arguments lacked coherent reasoning, anticipation of counterarguments, and awareness of implications or underlying policies.

Essentially, AI at this stage has no concept of what makes legal arguments convincing, ethical or just. It cannot discern the logical gaps or rhetorical weaknesses in its own writing. While AI can produce passable prose, it does not truly comprehend the facts and authorities it cites. The connections feel forced rather than fluid. As one analyst noted, we are still “decades away” from AI capable of discretionary legal analysis and reasoning like a seasoned attorney.

At the same time, I gained appreciation for AI’s potential value to augment lawyers. The technology showed promise for tasks like identifying relevant case law from vast databases or providing rough drafts to work from. AI could summarize documents, flag issues for consideration, and generate templates for transactional work. Used judiciously, it may aid productivity and expand access for underserved clients.

However, attorneys must supervise AI actively through feedback and “machine teaching.” Like training a paralegal, lawyers cannot simply input facts and press print. To improve quality, we must upload examples of effective and ineffective legal writing to continuously refine algorithms. With human guidance, AI could someday exhibit refined skills. But unattended, its drafting would violate ethical duties of competence, truthfulness and sound legal reasoning.

Robo-Scribe or Robo-Flop? One Lawyer's Adventure Testing AI-Generated Legal Briefs - AI Legal Tech: Here to Stay, But Not (Yet) an Attorney Replacement

While AI-generated legal documents currently fall short of attorney work product, these technologies are rapidly improving and gaining adoption. Legal tech companies tout benefits like efficiency, cost savings, and expanded access. However, experts debate the proper role of automation versus human attorneys.

Many believe AI will transform aspects of law practice but not wholly replace lawyers. According to studies by Deloitte and McKinsey, the highest impact will likely be streamlining routine tasks like contract review, due diligence, and document search/summarization. For these administrative functions, AI can save thousands of hours compared to manual work. Attorneys may then reallocate time towards strategy, writing, negotiation and client relations. AI can also expand services for underserved groups by automating routine assistance.

As McKinsey concludes, AI’s near-term potential is “augmenting professionals, not replacing them.” Blockchain and smart contracts can encode routine transactions, but complex deals still require bespoke negotiation. Document review is accelerated, but human judgment determines relevance. While AI can generate drafts, lawyers refine arguments and provide strategic oversight. The role of the attorney thus evolves from rote tasks to higher-order thinking.

This view aligns with that of most practicing lawyers. In ABA and State Bar Association surveys, only around 10-15% of US lawyers believe AI currently threatens legal jobs. The vast majority expect AI to change aspects of the work rather than replace it wholesale. Typical is Christina Blacklaws, who after testing AI contract software found it “very much a tool that facilitates the lawyer’s role” rather than “replacing what we do.” She emphasizes that AI must remain under attorney supervision rather than operate fully autonomously.

However, a minority of industry voices do warn that automation could substantially disrupt the profession. Richard and Daniel Susskind have argued that technology will irrevocably “change the way lawyers work, the jobs they do, the skills they possess.” Some predict that algorithms will first handle routine legal tasks, but eventually exhibit judgment on par with top litigators. As costs drop, companies may conduct more automated in-house work rather than hire outside counsel.
