Everybody seems worried about Artificial Intelligence and machine learning applications right now. I am too. But we should stop talking about regulating AI.

Many states are taking up AI limitation acts, and the EU, which has already implemented limits on the use of algorithms, is considering a deeper set of AI regulations organized around risk categories. This year, as directed by Congress, the National Institute of Standards and Technology (NIST) released an Artificial Intelligence Risk Management Framework aimed at developing standards for “trustworthy AI,” meaning algorithms that are “valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy enhanced, and fair with their harmful biases managed.” These standards are likely to be influential, particularly for machine-learning-based systems built for the government. But as the use of roughly a dozen impossibly vague terms to define when a computer program is trustworthy suggests, AI has grown into a multi-directional problem that probably can’t, and shouldn’t, be addressed in broad strokes.
