As a lawyer and now a judge who’s followed developments in artificial intelligence for years, I was pleased to have the chance to speak on this topic at the recent Ninth Circuit Conference and appreciated the article in The Recorder covering the panel on which I appeared. But I want to clarify my views on the potential risks posed by “superintelligence” discussed in philosopher Nicholas Bostrom’s influential book, “Superintelligence: Paths, Dangers, Strategies” (2014).

Bostrom argues that technological developments could eventually lead to forms of machine-based intelligence that sufficiently exceed human intelligence to pose an existential risk to humanity. During the panel, I emphasized instead the role that humans, rather than intelligent machines, may play in deploying and, in some cases, misusing the capabilities of automated technologies equipped with the capacity to use lethal force.
