How to write better AI prompts for legal work

Read time: 6 min
Written by Gabby MacSweeney
Published on September 12, 2025

AI can accelerate legal work, but the results depend on the quality of the instructions you give it. Prompts are the instructions that set the scope, constraints, and output format for a task. The right prompt can speed up review, drafting, and correspondence while improving accuracy. The wrong prompt can waste time, create rework, or introduce risk.

There’s also the question of where to prompt. Many legal teams now find themselves in a mixed environment. Some colleagues are experimenting with generic AI tools for quick tasks. Others are already using legal-specific AI that is integrated into secure workflows. Each has a place, but the way you prompt them should be different.

Why prompts matter

Most modern legal AI tools are powered by large language models (LLMs). An LLM is a type of artificial intelligence trained on massive amounts of text. Instead of storing facts, it learns patterns in how words are used and generates likely continuations of text.

This means it can draft clauses, summarise contracts, or answer questions in natural language, but it may also produce inaccurate or incomplete results.

The instructions you give it (your prompt) make a huge difference. If the prompt is vague (“make this better”), it has to guess what you mean. If the prompt is clear and structured (“rewrite this clause in plain English, 150 words maximum, under English law”), it has a clear path to follow and the results are far more reliable.

Think of it like delegating to a junior associate. Without context, you may not get what you expect. With precise instructions, you are more likely to receive useful work.

How LLMs respond to prompts

To understand why some prompts work better than others, it helps to know a few basics:

Attention: Unlike a human reader, who works through text line by line, an LLM processes all the words it sees at once. It “pays attention” to every part of your input, so if you include examples of bad drafting, it may focus on them and reproduce parts of them. For drafting, show good examples only. Negative examples can be used for classification (e.g. “choose which of these is correct”) but not for generation.

Probabilities: The model does not pick the same word every time. At each step, it chooses from a set of likely next words. This is why you may see different answers to the same question. More structured prompts reduce this variation.

Task complexity: If you ask the model to do something very complex in one go (e.g. “review this 50-page contract and draft a full report”), the results will be weaker. Breaking the task into smaller steps (plan, outline, draft) produces more accurate outputs.

Memory limits: Most models can only process a limited amount of text at once (their context window). If you upload a contract and the answer seems incomplete, the model may only have read the first few pages. Splitting documents into sections solves this.
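For teams with technical support, the splitting step can be automated rather than done by hand. A minimal sketch in Python (the character budget is an illustrative assumption, not a limit from any particular model or tool):

```python
def split_into_sections(text: str, max_chars: int = 8000) -> list[str]:
    """Split a long document into sections that fit within a model's input limit.

    Splits on blank lines (paragraph boundaries) so clauses are not cut
    mid-sentence. The max_chars budget is an illustrative stand-in for a
    real model's context window.
    """
    sections, current = [], ""
    for paragraph in text.split("\n\n"):
        # Start a new section when adding this paragraph would exceed the budget.
        if current and len(current) + len(paragraph) + 2 > max_chars:
            sections.append(current)
            current = paragraph
        else:
            current = f"{current}\n\n{paragraph}" if current else paragraph
    if current:
        sections.append(current)
    return sections
```

Each section can then be sent to the model separately (for example, one summary request per section) and the partial answers combined afterwards.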

Practical prompting tips for lawyers

The quality of what you get from an LLM depends heavily on what you put in. Good prompts give the model context, direction, and boundaries. Here’s what to keep in mind:

1. Provide context

Tell the system who you are and what you need. This frames the task.

Example: “I am a commercial lawyer reviewing NDAs. Summarise the key confidentiality obligations.”

2. Be precise

Define the tone, length, and scope you want. If you just say “improve this,” the system has to guess.

Example: “Summarise this clause in plain English, maximum 120 words, under English law.”

3. Use positive examples

Show the format you want such as bullet points, a table, or a redline. Avoid showing “bad” examples, because the system still pays attention to them.

Example: “List obligations in three columns: Party, Duty, Typical Pitfall.”

4. Break down tasks

LLMs work best step by step. Do not ask for a full draft of a 20-page contract in one go. Begin with an outline, then expand sections, then refine wording.

Example: First: “Create an outline of the standard sections in a SaaS agreement.” Then: “Draft the data protection section.”

5. Set guardrails 

Say what the system must do, not just what it must avoid. Negative instructions (“do not do X”) can still lead to mixed results.

Example: “Always quote case law directly.”

6. Reuse prompts

Save a standard set of instructions that work well for you and paste them into new sessions. This avoids repetition and keeps outputs consistent.

7. Use system prompts for consistency

Many users create a “system prompt”: a longer introduction that explains your role, objectives, and style rules. In generic tools such as ChatGPT, it must be re-entered each time you start a new chat. Legal-specific platforms like LEGALFLY keep your context in memory automatically, which removes much of that effort.
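In tools that expose a chat API, the system prompt is supplied once per session as a “system” message and then applies to every later question. A minimal sketch (the message format follows the common chat-completion convention; exact field names vary by provider, and the prompt text here is only an example):

```python
SYSTEM_PROMPT = (
    "You are assisting a UK commercial lawyer. "
    "Use plain English, quote clauses verbatim, and flag anything "
    "outside English law."
)

def new_session() -> list[dict]:
    # The system message is set once and persists for the whole conversation.
    return [{"role": "system", "content": SYSTEM_PROMPT}]

def ask(session: list[dict], question: str) -> list[dict]:
    # Each user turn is appended; the system context never needs re-typing.
    session.append({"role": "user", "content": question})
    return session
```

The same message list would then be passed to the provider’s chat endpoint on every turn, so the role and style rules travel with the conversation.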

Prompt library for legal tasks

Here are some ready-to-copy examples that you can adapt. Each follows the structure of Role → Task → Outcome → Guardrails.

Contract review

Prompt:

Role: UK commercial lawyer.

Task: Review this supplier agreement for GDPR compliance.

Outcome: Highlight clauses on data retention, transfers, and access rights that deviate from standard practice. Suggest alternative wording where needed.

Guardrails: English law only. Quote exact clauses rather than paraphrasing.

Compliance checks

Prompt:

Role: Compliance counsel.

Task: Check this outsourcing contract against DORA requirements.

Outcome: List any non-compliant clauses with the specific DORA article they conflict with and propose corrections.

Guardrails: Use current regulatory language. Provide references.

Clause comparisons

Prompt:

Role: Contracts counsel.

Task: Compare the limitation of liability clauses in these two contracts.

Outcome: Table with columns: Issue | Contract A | Contract B | Standard Position.

Guardrails: Identify material differences only. Do not restate entire clauses.

Drafting

Prompt:

Role: UK technology lawyer.

Task: Draft a service agreement for IT support based on our standard terms.

Outcome: Full draft including confidentiality, liability, and data protection clauses.

Guardrails: GDPR compliance required. Use plain English. Limit indemnities to direct losses only.
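For teams that keep these prompts in a shared library, the four-field structure can also be assembled programmatically so every prompt follows the same shape. A minimal sketch (the helper name and formatting are illustrative assumptions, not a feature of any particular platform):

```python
def build_prompt(role: str, task: str, outcome: str, guardrails: str) -> str:
    """Assemble a prompt following the Role -> Task -> Outcome -> Guardrails structure."""
    return (
        f"Role: {role}\n"
        f"Task: {task}\n"
        f"Outcome: {outcome}\n"
        f"Guardrails: {guardrails}"
    )
```

For example, the contract review prompt above becomes `build_prompt("UK commercial lawyer", "Review this supplier agreement for GDPR compliance", "Highlight clauses on data retention, transfers, and access rights that deviate from standard practice", "English law only. Quote exact clauses rather than paraphrasing.")`.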

Why general AI has drawbacks

Generic AI tools like ChatGPT or Claude are powerful, but they are not designed for legal-grade use. Their main limitations for legal teams are:

Security risks – Anything pasted into a public AI system may be processed or stored outside your control. This creates risks if client, counterparty, or employee data is included. If you do use a general AI tool, always remove sensitive information first. Replace names, dates, numbers, or company details with placeholders like [Client], [Amount], or [Date]. Never paste full contracts or regulated content.

No legal context – Generic models are not tuned for contracts, regulations, or your firm’s playbooks. Outputs can look convincing but miss key details.

Limited memory – Long contracts may be cut off, with the system only analysing the first few pages.

No consistency – Each session starts fresh, so you must keep re-explaining your role, style, and objectives.

For these reasons, general AI should only be used for low-risk, legal-adjacent tasks where no sensitive information is involved.

Legal AI you can trust with sensitive data

Legal AI platforms like LEGALFLY are designed specifically for legal teams. They build on the same underlying large language models (LLMs) but add safeguards and features that make them safe for high-value legal work:

Data protection by design – Sensitive information is anonymised before processing. Nothing is exposed to external environments without controls.

Legal context included – The platform applies your playbooks, internal policies, and risk thresholds so outputs match your standards.

Consistency – Unlike generic tools, LEGALFLY remembers your role and style across tasks. You do not need to re-prompt for every new session.

Plain language access – You can ask questions about contracts, clauses, or regulations in everyday language and get structured, legally sound responses.

This means you can upload full contracts, check compliance with regulations like GDPR or DORA, and generate drafts or comparisons without worrying about data leaks or missed nuances.

Before vs after: general AI and legal AI

General AI (ChatGPT/Claude):

“Review this NDA and flag risks.”

→ Output: A list of generic issues (governing law, liability, termination), with no alignment to your template or playbook. Risk of sensitive data being stored externally.

LEGALFLY:

“Review this NDA against our standard template. Highlight any changes to confidentiality or data handling, and suggest wording from our playbook.”

→ Output: Redlined clauses showing deviations, aligned to your template, anonymised during processing, with audit trail for compliance.

This is the difference between experimenting with a public AI tool and adopting a legal AI environment designed for real client work. With LEGALFLY, you get the speed of AI without losing control of your data, standards, or accountability.