Pet rocks, Cabbage Patch Kids, Tickle-Me-Elmo, and the latest holiday gift craze. Some things explode in the media and quickly flame out. Is all the talk about ChatGPT taking (and passing) bar exams, writing demand letters, and supplying sections of briefs a similar shooting star? Or is there substance? And if so, where? And what are the ethical limits as applied to dispute resolution?

Lawyers thinking about AI initially focus on computers' ability to perform, more efficiently, routine litigation tasks such as reviewing, analyzing, and organizing emails and large volumes of data into manageable lists and bites. eDiscovery is a problem that technology created, so it makes sense that technology should be enlisted to solve it. Keyword searches and de-duplication do help to separate the wheat from the chaff, but there are limits to relying on algorithms to present decision-makers with "key" documents. It is not just that the coder (who writes the algorithms) can bring to the task his or her biases or (in)experience. Rue the day — already here — when the computer writes the code. Even the most ardent supporters of AI recognize that AI efficiencies do not replace human judgment and experience.
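
To make the point concrete, here is a minimal, purely illustrative sketch of the keyword-search and de-duplication ideas mentioned above. It is not any vendor's eDiscovery platform; the function name and the sample emails are hypothetical. It also hints at the limitation noted in the paragraph: a relevant document that never happens to use the chosen keyword is simply never surfaced for review.

```python
# Illustrative sketch only: keyword filtering plus exact-duplicate removal
# via a content hash. Real eDiscovery tools are far more sophisticated.
import hashlib

def deduplicate_and_search(documents, keywords):
    """Return documents matching any keyword, with exact duplicates removed."""
    seen_hashes = set()
    results = []
    for doc in documents:
        digest = hashlib.sha256(doc.lower().encode("utf-8")).hexdigest()
        if digest in seen_hashes:
            continue  # exact duplicate of a document already reviewed
        seen_hashes.add(digest)
        if any(kw.lower() in doc.lower() for kw in keywords):
            results.append(doc)
    return results

emails = [
    "Please review the merger agreement before Friday.",
    "Please review the merger agreement before Friday.",  # duplicate copy
    "Lunch on Thursday?",  # relevant or not, it never matches the keyword
]
print(deduplicate_and_search(emails, ["merger"]))
# ['Please review the merger agreement before Friday.']
```

The sketch shows why keyword culling is a starting point, not a substitute for judgment: the filter returns only what the chosen terms happen to capture.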
