Insights Library

Using AI for Legal Advice and Strategy

Joel Snyder & Jonathan Hourigan

Artificial intelligence (AI) tools are increasingly being used by individuals, businesses and even other professionals to answer legal questions, draft documents, prepare briefs and “sense-check” legal strategies. While AI can be a useful productivity aid, there are significant risks in relying on it for legal advice or strategic decision-making.

At best, AI can assist with general background information. At worst, it can produce material that is confidently wrong, incomplete, or legally dangerous.

AI Does Not Understand the Law — It Predicts Text

AI systems do not “know” the law. They do not reason like lawyers, weigh evidence, interpret legislation, or apply precedent in a principled way. Instead, they predict text based on patterns in large datasets. They also draw information from other sources without the ability to verify it.

This distinction matters. Legal advice is not about producing plausible-sounding answers; it is about applying the correct law, to the correct facts, at the correct time, within the correct jurisdiction. It is also about the experience of a practitioner, which can make the difference to the outcome.

AI cannot deliver on this.

AI Frequently Gets the Law Wrong

As we have already seen, AI tools regularly:

  • Misstate legislation (including inventing sections that do not exist);
  • Apply repealed or amended laws;
  • Confuse jurisdictions (for example, mixing Victorian law with NSW or overseas frameworks);
  • Misapply case law or cite cases that do not exist;
  • Oversimplify complex statutory schemes such as planning controls, acquisition processes or compensation frameworks; and
  • Fail to identify critical issues for consideration.

Because the output is written fluently and with confidence, these errors are often difficult for non-lawyers to detect.

AI Lacks Context and Strategic Judgment

Legal strategy is not just about what the law says; it is about risk, timing, leverage, commercial outcomes and human behaviour.

AI cannot assess:

  • How a regulator, council or authority is likely to behave;
  • Whether a matter should be escalated, settled or strategically delayed;
  • The commercial consequences of a particular legal position;
  • How one decision will affect later negotiations or litigation;
  • Whether a position that looks legally “correct” in isolation will prove strategically disastrous in practice; and
  • Which issues are key to the analysis.

AI Cannot Replace Professional Judgment or Accountability

Lawyers are bound by professional duties, including duties of competence, confidentiality and loyalty. They are accountable for the advice they give. AI is bound by no such obligations.

… and yes, we can always tell when AI has drafted a document or email.

Confidentiality and Data Risks

Inputting sensitive information into AI tools can create serious confidentiality risks, as that information may then effectively become part of the public domain.

Once information is entered into a third-party AI system, control over that data may be lost. This is particularly dangerous in sensitive or confidential matters.

Where AI Can Be Useful

We appreciate that AI can be a helpful tool, and people should not be dissuaded from using it; however, its use should be limited.

In our human experience, it should never be relied on for:

  • Legal advice and reviewing law
  • Strategic decision-making
  • Interpretation of legislation or planning controls
  • Compensation strategy or negotiation positions
  • Advice where rights, value or timing materially matter
  • Preparing briefs or legal letters

We have had many recent experiences where we have readily identified shortcomings of using AI, including:

  • other law firms “proofing” our advice via AI tools for a client or family member, where the AI has invented cases that do not exist;
  • clients drafting letters which adopt questionable strategies and, again, cite legislation that does not exist;
  • receiving AI-generated briefs which attempt to identify key issues but miss important information; and
  • having clauses drafted for deeds which would not be commercially acceptable.

Our Position

At Best Hooper, we actively engage with technology, including AI, as a tool, not as a decision-maker or a substitute for legal analysis. It can be a good research tool where appropriate. We appreciate that it has a time and place; however, it will not replace our experience.

This article was originally drafted by AI; however, the authors then spent considerable time reviewing it and adding “human value” before publishing.

For clients: if a matter is important enough to ask an AI about, it is important enough to get proper legal and strategic advice.

Joel Snyder

Partner

Jonathan Hourigan

Partner