Analyzing direct evidence: When an avalanche is actually a snow flurry
Quantifying how far events strayed from what was expected and/or acceptable by calculating the standard deviation of actual event outcomes gives you hard evidence of how big or small the snowfall really was.
March 21, 2014 at 04:00 AM
10 minute read
The original version of this story was published on Law.com
Attorneys are often faced with the task of mining email and other documentary evidence in an attempt to gather sufficient examples that demonstrate something was seriously amiss in their opponents' business. But when does one have enough statistically valid data to discover whether the information shows an avalanche or just a snow flurry?
For example, an attorney may want to demonstrate that the number of customer complaints, product failures or overcharges had become an “avalanche,” thereby warranting class status and perhaps special damages.
Proving an “avalanche” through indirect evidence can be challenging because common language is so often imprecise and relative. One might imagine that an email from a manufacturing company's CEO to a VP lamenting that “product failures have become an avalanche” would be sufficient evidence for a strong case.
But while the CEO might feel that the number of failures has increased to epic proportions, the actual number of additional failures might in fact be very low. A similar statement made by the CEO of a less diligent company might signify a true disaster in the making. One man's avalanche is another's snow flurry.
Sometimes one really needs to evaluate the numbers to know what is going on.
Many attorneys shy away from requesting and directly analyzing event outcomes. They feel they lack the expertise for that sort of analysis and often cannot justify hiring expensive consultants. In reality, analyzing event logs and quality assurance reports is not very difficult. Programs like Excel reduce the actual calculation to entering a column of data and applying a built-in function.
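To illustrate how little is involved, here is a minimal sketch in Python of the same calculation Excel performs with its STDEV.S function. The monthly failure counts are hypothetical, standing in for figures pulled from an event log or QA report.

    from statistics import mean, stdev

    # Hypothetical monthly product-failure counts pulled from an event log
    monthly_failures = [12, 9, 11, 14, 10, 13, 12, 48, 55, 61, 50, 58]

    print(f"Mean:               {mean(monthly_failures):.1f} failures per month")
    print(f"Standard deviation: {stdev(monthly_failures):.1f} failures per month")

Two lines of data entry and one function call produce the numbers the rest of this article relies on.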
Attorneys' reluctance is usually rooted in a lack of clarity about what exactly one is trying to show by running the analysis. In fact, the objective is very simple: We are trying to quantify the difference between what actually happened and what should have happened.
Stated in a more formal mathematical way, you are trying to demonstrate unexpected or excessive variance from an acceptable mean. This article demonstrates how easy it is to gather and analyze this kind of evidence.
To conduct a quantifiable analysis about the degree of variance in a series of events (e.g., a spate of product failures), one must understand the basics of two concepts: standard deviation and expected (or acceptable) defect rate. Used together, these concepts let the events themselves quantify the magnitude of the problem.
What is standard deviation? Simply stated, standard deviation shows you how much variance exists in a series of events. A low standard deviation indicates that the data points cluster close to the mean; a high standard deviation indicates that they are spread out over a wide range of values.
In litigation, one is often trying to demonstrate that at some point in time events veered from their previous, expected or acceptable pattern. The number of errors, failures or accidents was running at one (lower) level, and after some point in time, those incidents rose to a higher level. The way to demonstrate that the increase in problems was indeed of avalanche proportions is to calculate and compare the standard deviation of the events across two time periods. Because the standard deviation is a number and is expressed in the same units as the underlying data, standard deviations are easy to understand and interpret.
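The comparison itself is straightforward. The sketch below, again using hypothetical counts, splits the data at the suspected turning point and reports the mean and standard deviation for each period, plus how far the later period sits above the earlier pattern, measured in standard deviations of that earlier period.

    from statistics import mean, stdev

    # Hypothetical monthly failure counts before and after the suspected turning point
    before = [12, 9, 11, 14, 10, 13, 12]
    after  = [48, 55, 61, 50, 58]

    mean_before, sd_before = mean(before), stdev(before)
    mean_after,  sd_after  = mean(after),  stdev(after)

    print(f"Before: mean {mean_before:.1f}, standard deviation {sd_before:.1f}")
    print(f"After:  mean {mean_after:.1f}, standard deviation {sd_after:.1f}")

    # How far the later period departs from the earlier pattern,
    # expressed in standard deviations of the earlier period
    shift = (mean_after - mean_before) / sd_before
    print(f"The 'after' mean sits {shift:.1f} standard deviations above the 'before' mean")

Because the results are expressed in the same units as the underlying data (failures per month, in this example), they are easy to present and explain.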
Standard deviation is the measure of how actual events differed from what was expected. To calculate the standard deviation, we need to know two things: what actually happened and what was expected to happen. How do we know what was expected? Two related concepts from the world of quality assurance (QA) answer this question. The first is the acceptable quality level (AQL); the other is often referred to as the “expected defect rate.” In any organization with a formal quality assurance program, one or the other of these numbers has been defined and communicated clearly to those building and delivering the “product.” An organization with an active QA program defines the quality level it must meet to fulfill its objectives and keep its customers happy. Depositions are a great place to learn what these numbers are.
AQL is the largest percentage of defects a product “lot” can contain and still be acceptable to customers. This number is used in acceptance sampling, an older but still common method of conducting QA. Companies that employ zero-defect testing (a more modern method) to track quality standards use a related number, often called the “expected defect rate,” which is the percentage of defects currently experienced on the production line. Comparing these benchmarks to the actual number of defects experienced during the time period in question gives you a precise measurement of the difference between what actually happened and what the company had expected, and perhaps promised, would happen.
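As a rough illustration of that comparison, the sketch below contrasts an observed defect count with a stated expected defect rate and expresses the excess in standard deviations. All figures are hypothetical, and the sketch uses the standard binomial approximation for a defect rate applied to a known number of units.

    import math

    # Hypothetical figures for illustration
    units_produced       = 10_000   # units shipped in the period at issue
    defects_observed     = 240      # defects actually logged in that period
    expected_defect_rate = 0.01     # the AQL or expected defect rate the company stated (1%)

    observed_rate = defects_observed / units_produced

    # If the expected rate held, the defect count would behave roughly like a binomial,
    # whose standard deviation is sqrt(n * p * (1 - p))
    expected_defects = units_produced * expected_defect_rate
    sd_defects = math.sqrt(units_produced * expected_defect_rate * (1 - expected_defect_rate))

    excess_in_sds = (defects_observed - expected_defects) / sd_defects

    print(f"Observed defect rate: {observed_rate:.2%} vs. expected {expected_defect_rate:.2%}")
    print(f"Observed defects exceed expectations by {excess_in_sds:.1f} standard deviations")

A gap of a fraction of a standard deviation looks like a snow flurry; a gap of many standard deviations is the kind of hard number that supports the word “avalanche.”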
Without question, gathering additional evidence about “who knew what and when” from email and loose files is very important to proving a position. But it can be equally valuable to quantify what went on. Quantifying how far events strayed from what was expected and/or acceptable by calculating the standard deviation of actual event outcomes against the previous or expected mean gives you hard evidence of how big or small the snowfall really was.