AI Is Here. Is Your Company Ready? (Hint: No)
November 22, 2017 at 11:51 AM
The scale and scope of artificial intelligence are well described. Merrill Lynch predicts an “annual creative disruption impact” of $14 to $33 trillion by 2025. Accenture estimates AI could double annual economic growth for 12 developed nations by 2035. Stephen Hawking predicts “the rise of powerful AI will be either the best, or the worst thing, ever to happen to humanity.”
Already, AI technologies are being adopted in medicine, law, finance, manufacturing, transportation, policing and retail, to name a few. And it's been widely reported that AI has the potential to transform our legal system. But how? AI undermines basic assumptions about causation, reliance and the role of human agency and supervision.
While legal efforts are underway to address the military applications of AI, including lethal autonomous weapons, civil commercial law is unprepared. Companies and their lawyers should be thinking now about the changes AI may bring and how to manage the risks where possible.
What Becomes of Tort Law?
Chief among AI's risks is safety, and AI will test the normal checks and balances of tort law. AI blurs the line between product and service, but even under a strict products liability approach, where questions of fault are set aside, AI can render basic causation unknowable.
What makes AI powerful—its ability to detect connections among, and perhaps derive meaning from, vast data sets where humans cannot—can make its mistakes inscrutable as well.
Occasionally, in hindsight, a cause might be plain—like the data entry error that led an algorithm to recommend bail for a defendant who then committed murder shortly after release. More often, causation will be impractical, if not impossible, to show: AI is coded by programmers, trained on data that may be labeled by people, and allowed to evolve, often under human monitoring, producing recommendations that machines or people may act upon (or not).
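To make that chain concrete, consider a deliberately toy sketch of the many hands behind one recommendation. Every function and name below is hypothetical, invented for illustration; nothing here models a real system.

```python
# Hypothetical sketch only: four independent parties shape one AI output.

def label_data(raw_records, human_labeler):
    """Step 1: people label the training data (a mislabel surfaces much later)."""
    return [(record, human_labeler(record)) for record in raw_records]

def train_model(labeled):
    """Step 2: programmers' code fits a model to those labels (here, a toy
    majority-class stand-in for something far more complex)."""
    counts = {}
    for _, label in labeled:
        counts[label] = counts.get(label, 0) + 1
    majority = max(counts, key=counts.get)
    return lambda record: majority  # every input gets the majority label

def monitored_recommendation(model, record, human_reviewer):
    """Step 3: a human monitor accepts or overrides the model's suggestion."""
    suggestion = model(record)
    return suggestion if human_reviewer(record, suggestion) else "escalate"

# Step 4: a downstream actor follows the recommendation, or doesn't. By then,
# four parties (labeler, programmer, reviewer, actor) have shaped the result,
# which is why pinning a bad outcome on any single cause is so difficult.
```

An error introduced at any one of these steps may surface only at the last, long after the responsible party has left the scene.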
Add to this complexity the fact that the central algorithms at work may be completely opaque to human minds, even in retrospect. As one court noted recently in the unintended acceleration cases: “To the extent that a software's complexity renders testing unreliable (and thus, useless), sound scientific principles counsel against such testing.” As courts find causation increasingly impossible to untangle, lawmakers may adopt stricter forms of liability that spread costs up and down supply chains for products and services. Companies should negotiate contractual solutions on the front end to mitigate this risk.
Professional Judgment
AI will change the way workers interact with technology, testing basic principles of due care and professional liability.
Take the medical field: Fifty percent of hospitals will adopt some form of AI within the next five years. While human monitoring predicts heart attacks 30 percent of the time, AI systems reach 80 percent. For the moment, human monitoring of AI systems can improve outcomes. But for how long?
The law rests on the traditional notion that human-monitored technology is safer—from the “informed intermediary doctrine” to the exclusion of clinical software from FDA regulation where health care professionals “independently review the basis” for the software's recommendations.
But soon, AI may be the better-informed intermediary, and professionals may fail to understand the basis for software's superior suggestions. What then of the doctor, lawyer or air traffic controller who departs from a machine's counterintuitive, data-driven instructions?
Companies will need to monitor industry standards, arrange their human/machine systems accordingly, and allow those systems to evolve as AI and standards of care evolve.
How About Privacy?
Privacy is another area where companies will surely misstep. Already, trace signals in vast data sets are revealing deeply personal traits. One retailer accidentally disclosed a teenager's pregnancy to her father through algorithms that mailed coupons for maternity products based on shifts in cotton ball and lotion purchases.
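How little signal such an inference requires is easy to underestimate. As a purely illustrative sketch (the features, numbers and model choice below are invented assumptions, not any retailer's actual method), a few lines of off-the-shelf code can turn innocuous purchase shifts into a sensitive prediction:

```python
# Hypothetical illustration only: invented features and data. Requires scikit-learn.
from sklearn.linear_model import LogisticRegression

# Each row: quarter-over-quarter change in purchases of cotton balls,
# unscented lotion and vitamin supplements (made-up training data).
X_train = [
    [0.1, 0.0, 0.2],  # labeled not pregnant
    [0.0, 0.1, 0.0],  # labeled not pregnant
    [2.5, 3.0, 2.0],  # labeled pregnant
    [1.8, 2.2, 2.5],  # labeled pregnant
]
y_train = [0, 0, 1, 1]

model = LogisticRegression().fit(X_train, y_train)

# A new shopper whose basket has quietly shifted toward the second pattern.
prob = model.predict_proba([[2.0, 2.4, 1.9]])[0][1]
print(f"Inferred probability of pregnancy: {prob:.2f}")
```

The shopper disclosed nothing; the model read the trace signals.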
In test studies, AI has inferred race, gender, personality, sexual orientation, politics, suicidality and more from opaque data. As our digital footprints disclose more with increasing certainty, companies should anticipate a shift in privacy law from the reasonable expectation of privacy to the reasonable demand for it, changing the question from what is knowable to what should be usable. To paraphrase Supreme Court Justice Sonia Sotomayor, secrecy may cease to be a prerequisite for privacy.
Already, state and federal legislation, some adopted, some proposed, is carving out protected classes of information: genes (GINA); health (HIPAA); children's data (COPPA); biometrics (BIPA); intimate images (IPPA); geolocation (GPS Act); and booklists (RPA), to name a few. But why genes and not other biomarkers? Why books but not magazines? Companies should anticipate and protect such emerging zones of privacy now to minimize risk.
The list of affected bodies of law is vast. For example, algorithms, even facially neutral ones, can discriminate, and creative algorithms can produce IP with no clear owner. All of these issues are better addressed in advance—if they can be clearly identified. AI will soon be an important, if not required, tool across industries. We cannot be sure how fast, but we know it's coming. Companies and their counsel should be planning now to predict and manage risk as these technologies arrive.
Danny Tobey, a Vinson & Elkins partner, is a graduate of Harvard College and Yale Law School. A former software entrepreneur and medical doctor, he has spoken on AI with companies ranging from startups to the Fortune 100.