Ask almost anyone to define artificial intelligence, and you’ll likely get an answer that sounds like something out of a sci-fi movie. A few real-life examples might come up, such as self-driving cars or IBM’s Watson winning “Jeopardy!” But for many, AI still evokes a sense of futuristic fantasy.

Defining AI is hard. A common (if vaguely circular) definition is the replication of human thought processes—language, learning, decision-making, and so on—by computers. In part because it’s hard to define AI in general, it’s also hard to say what particular technologies are or aren’t AI. Optical character recognition seemed like AI until it made its way into almost every ATM in order to read deposited checks (and it doesn’t help that “optical character recognition” isn’t quite as awe-inspiring as “a computer that can read”).
