Legal Group Sets AI Guidelines

The New York State Bar Association’s report is designed to respond to ethics and privacy concerns stoked by tools like ChatGPT.

The New York State Bar Association has released guidelines to its members on the use of generative AI, with the goal of addressing privacy breaches, ethical missteps, and “hallucinated” case briefs.

The NYSBA’s report [PDF], published earlier this month, is the product of months of work by a dedicated Task Force on Artificial Intelligence, created in one of the first acts taken by President Richard Lewis. “We pick issues [for task forces] because they’re the issues that are in our face right now,” he said. “And you can’t find anything that’s more in-your-face than AI—it’s a subject that is on everyone’s mind.”

The task force is also responding to some recent high-profile embarrassments in the legal profession, including an incident last year in which two New York lawyers were sanctioned by a federal judge for using the generative-AI tool ChatGPT to create a legal brief that included fake case citations. The report includes a discussion of that case and of the way AI tools and large language models can “hallucinate” content.

“That was one of the things that hit us squarely in the face,” Lewis said. “What transpired in that particular case is that the attorneys, for whatever reason, did not check their work…. It’s a reminder that AI is not a partner. It’s not a lawyer. While it may be able to think in search terms, it’s not something you can totally depend upon. You still have to be involved.”

To address the range of concerns facing the profession, Lewis, NYSBA staff, and the task force chair, Vivian Wesson, assembled a group with a diverse range of experiences.

“We wanted it to be representative of our membership,” Lewis said. “We have people who are in Manhattan and people in upstate New York, and they’re affected similarly in some ways and differently in others…. We also wanted to bring in academia because law-school professors are the people who are going to be teaching the next generation of lawyers.”

Among the task force’s recommendations are guidance on safeguarding private client data to protect attorney-client privilege and requirements that attorneys tell clients when and how AI tools are used.

“A lot of the time the information accumulated by lawyers is confidential, and we don’t want that to somehow become put into the general vocabulary,” he said. “That’s a huge concern…. The idea of certain things being fed into your confidential notes being part of the public weal, that’s just not acceptable.”

Since the publication of the task force’s report, Lewis and other NYSBA representatives have discussed its findings with members of New York’s state legislature as well as the staff of the state’s U.S. congressional delegation. Among the questions going forward is whether regulation of generative AI needs to be industry-specific or whether one-size-fits-all guidelines can be applied.

“It’s a big question: What are we going to do about regulating this situation?” he said. “Elon Musk may have one view, and Jeff Bezos may have another view.”

The post Legal Group Sets AI Guidelines appeared first on Associations Now.