Attorneys are often faced with the task of mining email and other documentary evidence to gather enough examples to demonstrate that something was seriously amiss in their opponents' business. But when does one have enough statistically valid data to determine whether the evidence shows an avalanche or just a snow flurry?

For example, an attorney may want to demonstrate that the number of customer complaints, product failures or overcharges had become an “avalanche,” thereby warranting class status and perhaps special damages.

Proving an “avalanche” through indirect evidence can be challenging because common language is so often imprecise and relative. One might imagine that an email from a manufacturing company's CEO to a VP lamenting that “product failures have become an avalanche” would be sufficient evidence for a strong case.

But while the CEO might feel that the number of failures has increased to epic proportions, the actual number of additional failures might in fact be very low. A similar statement made by the CEO of a less diligent company might signify a true disaster in the making. One man's avalanche is another's snow flurry.

Sometimes one really needs to evaluate the numbers to know what is going on.

Many attorneys shy away from asking for and directly analyzing event outcomes. They feel that they lack the expertise necessary to do that sort of analysis and are often unable to hire expensive consultants. In reality, though, analyzing event logs and quality assurance reports is not very difficult. Programs like Excel reduce the actual analysis to entering a column of data and applying a built-in function such as AVERAGE or STDEV.
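
To see how little work is involved, here is a minimal sketch in Python; the monthly failure counts are invented for illustration, and Excel's AVERAGE and STDEV functions perform the same calculation on a spreadsheet column:

    import statistics

    # Hypothetical monthly product-failure counts pulled from an event log
    failures = [12, 9, 11, 10, 13, 11, 12, 10]

    mean = statistics.mean(failures)    # average failures per month
    sd = statistics.stdev(failures)     # sample standard deviation

    print(f"mean = {mean:.1f} failures/month, standard deviation = {sd:.1f}")

Two lines of arithmetic turn a raw event log into the two numbers the rest of this article relies on.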

Attorneys' reluctance is usually rooted in a lack of clarity about what exactly one is trying to show by running the analysis. In fact, the objective is very simple: We are trying to quantify the difference between what actually happened and what should have happened.

Stated more formally, you are trying to demonstrate unexpected or excessive variance from an acceptable mean. This article demonstrates how easy it is to gather and analyze this kind of evidence.

To conduct a quantifiable analysis about the degree of variance in a series of events (e.g., a spate of product failures), one must understand the basics of two concepts: standard deviation and expected (or acceptable) defect rate. Used together, these concepts let the events themselves quantify the magnitude of the problem.

What is standard deviation? Simply stated, standard deviation shows you how much variation exists in a series of events. A low standard deviation indicates that the data points tend to cluster close to the mean; a high standard deviation indicates that the data points are spread out over a large range of values.
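
For reference, the standard textbook formula: given n observations x_1, …, x_n with mean x̄, the sample standard deviation is

    s = \sqrt{ \frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})^2 }

In plain terms: measure each observation's distance from the mean, square those distances, average them (with the usual n − 1 correction for a sample), and take the square root.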

In litigation, one is often trying to demonstrate that at some point in time events veered from their previous, expected or acceptable pattern. The number of errors, failures or accidents was running at one (lower) level, and after some point in time, those incidents rose to a higher level. The way to demonstrate that the increase in problems was indeed of avalanche proportions is to calculate and compare the standard deviation of the events across two time periods. Because the standard deviation is a number and is expressed in the same units as the underlying data, standard deviations are easy to understand and interpret.
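
One common way to operationalize that two-period comparison is sketched below in Python, again with invented monthly counts: establish the mean and standard deviation of the baseline period, then ask how many of those baseline standard deviations the later period's average sits above it.

    import statistics

    # Hypothetical monthly failure counts, invented for illustration
    baseline = [10, 12, 9, 11, 10, 13]   # before the alleged change
    at_issue = [38, 41, 45, 39]          # the period in question

    base_mean = statistics.mean(baseline)
    base_sd = statistics.stdev(baseline)

    # How far above the old pattern is the new average, in baseline units?
    z = (statistics.mean(at_issue) - base_mean) / base_sd

    print(f"baseline: {base_mean:.1f} failures/month (sd {base_sd:.1f})")
    print(f"the period at issue runs {z:.1f} standard deviations above it")

With these particular numbers the later period runs roughly twenty baseline standard deviations above the old pattern; no fact-finder needs a statistics degree to see the avalanche in that.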

Standard deviation is the measure of how actual events differed from what was expected. To calculate the standard deviation we need to know two things: what actually happened and what was expected to happen. How do we know what was expected? Two related concepts from the world of quality assurance (QA) answer this question. The first is the notion of an acceptable quality level (AQL); the other is often referred to as the “expected defect rate.” In every organization that claims a formal quality assurance program, one or the other of these numbers has been defined and communicated clearly to those building and delivering the “product.” An organization with an active QA program defines the quality level it must meet to fulfill its objectives and keep its customers happy. Depositions are a great place to learn what these numbers are.

AQL is the largest percentage of defects a product “lot” can contain and still be acceptable to customers. This number is used in acceptance sampling, an older but still common method of conducting QA. Companies that employ zero defect testing (a more modern method) to track quality use a related number, often called the “expected defect rate”: the percentage of defects currently experienced on the production line. Comparing these rates to the defect rate actually experienced during the time period in question gives you a precise measurement of the difference between what actually happened and what the company had expected, and perhaps promised, would happen.
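
As a sketch of that comparison, assume (purely hypothetically) an expected defect rate of 2% and a production run of 5,000 units. Treating each unit as an independent pass/fail trial, the expected defect count and its standard deviation follow from the binomial distribution:

    import math

    # Invented numbers for illustration
    expected_rate = 0.02      # the company's stated expected defect rate
    units = 5000              # units produced in the period at issue
    defects_found = 240       # defects actually recorded

    expected_defects = expected_rate * units
    # Standard deviation of the defect count if the expected rate still held
    sd = math.sqrt(units * expected_rate * (1 - expected_rate))

    z = (defects_found - expected_defects) / sd
    print(f"expected about {expected_defects:.0f} defects, found {defects_found}")
    print(f"that is {z:.1f} standard deviations above expectation")

Here the observed count sits roughly fourteen standard deviations above what the company's own quality target predicts, which is exactly the kind of hard number this article is after.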

Without question, gathering additional evidence about “who knew what and when” from email and loose files is very important to proving a position. But it can be equally valuable to quantify what went on. Calculating the standard deviation of actual event outcomes against the previous or expected mean quantifies how far events strayed from what was expected or acceptable, and gives you hard evidence of how big or small the snowfall really was.
