A number of compliance thought leaders have written about the importance of good people to an effective compliance program. Programs just don't work without the right talent, and smart compliance leaders get that. Once a company figures out the people piece (which is not easy), evaluating its risk environment and regulatory requirements helps it develop programs that address those areas across different subject matters and business units.

Regulatory guidance tells companies to perform risk assessments, focusing on root cause analysis and steps to mitigate risks, but does not provide a government-endorsed road map for how to do it. For instance, the Department of Justice's Evaluation of Corporate Compliance Programs (the evaluation guidance) sets forth eleven sample topics and questions that the Fraud Section may consider in evaluating a corporate compliance program. The fifth topic is risk assessment, which includes questions about the company's risk management process and methodology, its use of information or metrics, and how the process accounts for risks that have manifested. You get the what and the why, but not the how. But you really don't want the U.S. Sentencing Commission or the Department of Justice (or even the Securities and Exchange Commission) telling a company how to conduct a risk assessment. The government realizes that organizations differ, so processes will vary, and these agencies have little reason to develop a detailed risk assessment model. That does not mean, however, that government agencies have not produced thoughtful and credible guidance.

The most robust government process we have seen is the one put forth by the National Aeronautics and Space Administration (NASA), the same folks who put a man on the moon and run the space station. NASA has been doing risk assessments since before Chapter Eight of the federal sentencing guidelines existed. And there is a lot at stake for the space program: human lives, reputation, and billions of dollars in investment.

The NASA process is publicly available to anyone who wants to take a look. It is straightforward and includes three basic components:

  1. Identify risks.
  2. Analyze and prioritize risks by comparing the likelihood of occurrence, consequence, impact time frame (sunrise and sunset dates), and impact horizon (impact time frame in relation to the current date):
  • Likelihood is a measure of the possibility that a consequence is realized. You can assess likelihood qualitatively or quantitatively (i.e., in terms of frequency or probability).
  • Consequence (impact) describes the foreseeable, negative impact on the organization's ability to meet performance requirements. You assign a consequence score from 1-5 for each separate consequence category (e.g., performance or cost) and use the highest score as the final consequence score.
  • Impact time frame represents the time when the risk may occur. It consists of two dates: a sunrise date, the earliest time the risk could become realized; and a sunset date, the latest time the risk could become realized.
  • Impact horizon represents an abstract time frame in which the risk may occur based on the dates in the impact time frame. The impact horizon helps to further prioritize risks according to the time frame in which they will occur.

  3. Use the risk's likelihood and consequence scores to determine a priority score using a risk matrix. In NASA's sample matrix, for example, a risk with a likelihood score of 2 and a consequence score of 3 results in a priority score of 11.
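For readers who like to see the mechanics, here is a minimal sketch in Python of a matrix-based priority lookup. The cell values are placeholders of our own, not NASA's published matrix; the only number taken from the text is the likelihood 2, consequence 3 pairing that yields 11.

```python
# Minimal sketch of matrix-based risk prioritization (hypothetical values).
# Rows are likelihood (1-5), columns are consequence (1-5). Except for the
# likelihood 2 / consequence 3 -> 11 cell from the example above, these
# numbers are illustrative placeholders, not NASA's published matrix.
RISK_MATRIX = [
    [1,   2,  4,  7, 12],
    [3,   5, 11, 14, 17],
    [6,   9, 13, 18, 20],
    [8,  15, 16, 21, 23],
    [10, 19, 22, 24, 25],
]

def priority_score(likelihood: int, consequence: int) -> int:
    """Look up the priority score for 1-5 likelihood/consequence inputs."""
    if not (1 <= likelihood <= 5 and 1 <= consequence <= 5):
        raise ValueError("likelihood and consequence must be between 1 and 5")
    return RISK_MATRIX[likelihood - 1][consequence - 1]

print(priority_score(2, 3))  # 11, matching the example above
```

The point is the mechanism, not the particular numbers: the matrix encodes the organization's judgment about which likelihood/consequence combinations deserve attention first.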

The Committee of Sponsoring Organizations of the Treadway Commission (COSO) uses a similar process and adds vulnerability (the susceptibility of the entity to a risk event, in terms of the entity's preparedness, agility, and adaptability) and velocity (the time it takes for a risk event to manifest itself) (https://www.coso.org/Pages/default.aspx). Importantly, COSO also notes that the impact assessment can focus on financial, reputational, or regulatory impact. But both frameworks use impact and likelihood to produce a score that prioritizes different risks within a subject matter and ranks different subject matters against one another. The nuances of the definitions will vary from company to company, but the basic scoring methodology is tried and true.
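As a rough illustration (not COSO's specification), a risk record carrying these extra dimensions might look like the sketch below; the field names and 1-to-5 scales are our assumptions, with velocity and vulnerability used here as tiebreakers.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    # Field names and 1-5 scales are illustrative assumptions, not COSO's spec.
    name: str
    likelihood: int     # 1 (rare) to 5 (near certain)
    impact: int         # highest of the financial, reputational, regulatory scores
    velocity: int       # how fast the event manifests: 1 (slow) to 5 (immediate)
    vulnerability: int  # preparedness/agility/adaptability: 1 (ready) to 5 (exposed)

risks = [
    Risk("Data breach", likelihood=3, impact=5, velocity=5, vulnerability=4),
    Risk("Export control gap", likelihood=5, impact=5, velocity=2, vulnerability=5),
]

# Rank by impact x likelihood, using velocity and vulnerability as tiebreakers.
ranked = sorted(risks,
                key=lambda r: (r.likelihood * r.impact, r.velocity, r.vulnerability),
                reverse=True)
for r in ranked:
    print(r.name, r.likelihood * r.impact)
```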

What is the point of this scoring? Do you need it to rank risks and subject matters? A good risk assessment process engages the functions involved in a subject matter that falls within the scope of the particular risk. A simple, uniform, and consistent scoring methodology helps these functions (and the risk assessment input they provide) speak the same language. When the operations people tell you a privacy risk is a 15 because the financial/operational impact is five out of five and the likelihood is middle of the road at a three, but a trade risk is a 25 because the likelihood of a compliance failure is much higher due to a lack of controls or an immature program, the other functions participating in the risk assessment process understand the methodology.
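Read literally, those scores are the product of impact and likelihood (5 x 3 = 15 for privacy, 5 x 5 = 25 for trade). Here is a minimal sketch of that shared methodology; the multiplicative formula is our reading of the example, not a prescribed standard.

```python
def score(likelihood: int, impact: int) -> int:
    """Shared scoring language: impact times likelihood, each on a 1-5 scale."""
    return likelihood * impact

# The example from the text, read as impact x likelihood:
privacy = score(likelihood=3, impact=5)  # 15: severe impact, middling likelihood
trade = score(likelihood=5, impact=5)    # 25: weak controls make failure likely
print(privacy, trade)  # 15 25
```

Because every function computes its number the same way, a 25 from the trade team and a 15 from operations are immediately comparable.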

The downside of numerical scoring is that it can disguise the subjectivity of risk scoring. The objective part of the risk assessment comes from pairing the scoring with objective data points. For instance, if you are developing a risk assessment process for anti-corruption and your business uses third parties, then the number of third parties engaged by the business and the tasks they perform may be an objective data point on risk. For privacy, an objective data point may be the number of records impacted by data breach incidents. For trade, it could be the volume of products a company imports annually. These objective data points can help guide the scoring, but the scoring itself can be highly subjective. That's why it's important to have the right people involved.
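One way to anchor a subjective score to objective data is to bucket a metric into the 1-to-5 scale. In the sketch below, the metric, thresholds, and function name are all invented for illustration; each company would calibrate its own cut points.

```python
# Hypothetical cut points mapping an objective metric (here, the number of
# third parties engaged by the business) onto a 1-5 likelihood score. The
# thresholds are invented for illustration, not drawn from any guidance.
THIRD_PARTY_BUCKETS = [(10, 1), (50, 2), (200, 3), (1000, 4)]

def likelihood_from_count(count: int) -> int:
    for ceiling, bucket_score in THIRD_PARTY_BUCKETS:
        if count <= ceiling:
            return bucket_score
    return 5  # anything above the top threshold gets the maximum score

print(likelihood_from_count(35))    # 2
print(likelihood_from_count(4000))  # 5
```

The bucket anchors the conversation: the team can still move the score up or down, but it has to explain why it is departing from the data.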

Earlier this year, we attended a Superforecasting workshop in Chicago, part of the Good Judgment Project. The crowd was interesting, including people from various industries, academia, government, and retirees: a good representation of the people you might have found hanging out in the library back when more people visited libraries. We practiced predicting the likelihood of different events, using different inputs and information to make our predictions more accurate. If you were asked to predict whether a recently married couple might divorce and you were at the wedding, you might predict a low chance (especially if it was a fun wedding). But if you weren't at the wedding, you might put the odds closer to the national average. Bias can have a big impact on our predictions; if you know more about the couple, you may put the chances higher or lower. The takeaway for risk assessments, for us, was to involve people in the risk assessment process who can help you make better predictions or forecasts. In their book, “Superforecasting: The Art and Science of Prediction,” Philip Tetlock and Dan Gardner wrote, “If forecasters can keep questioning themselves and their teammates, and welcome vigorous debate, the group can become more than the sum of its parts.” Tetlock's model is risk assessment 101.

The entire point of the risk assessment process is not to score risks (although discussing scores can be fun); it is to address the risks in a thoughtful and consistent way. This is what the regulatory guidance focuses on: root cause analysis and mitigation. A well-designed risk assessment has a process to address root causes and mitigate risk, along with a process for tracking action plan completion and how those action plans later impact the risks.
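A minimal sketch of what such tracking could look like, with field names entirely of our own invention: each risk carries its root cause, its action plans with completion status, and a residual score captured once mitigation lands.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ActionPlan:
    # Field names are our own invention, sketching the tracking the text describes.
    description: str
    owner: str
    complete: bool = False

@dataclass
class AssessedRisk:
    name: str
    root_cause: str
    score: int                            # likelihood x impact when assessed
    residual_score: Optional[int] = None  # re-scored after mitigation
    plans: List[ActionPlan] = field(default_factory=list)

    def open_plans(self) -> List[ActionPlan]:
        return [p for p in self.plans if not p.complete]

risk = AssessedRisk(
    name="Third-party bribery",
    root_cause="No due diligence on new intermediaries",
    score=20,
    plans=[ActionPlan("Stand up a third-party due diligence workflow", "Compliance")],
)
risk.plans[0].complete = True
risk.residual_score = 8  # re-assessed once the control is in place
print(len(risk.open_plans()), risk.residual_score)  # 0 8
```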

After the Deepwater Horizon rig disaster, Anadarko (one of the companies involved) noted that it had adapted a NASA risk assessment model to address risk. This made perfect sense. The NASA process works; it is adaptable and scalable. And NASA regularly hosts summits and events to educate different industries on risk management. If NASA can help run the International Space Station, it probably has some tools that can improve your risk assessment process.

Ryan McConnell and Meagan Baker are lawyers at R. McConnell Group—a compliance and internal investigations boutique law firm in Houston, Texas. McConnell is a former assistant U.S. Attorney in Houston who has taught criminal procedure and corporate compliance at the University of Houston Law Center. Baker's work at the firm focuses on risk and compliance issues in addition to assisting clients with responding to compliance failures. Send column ideas to [email protected]. Follow the firm on Twitter at @rmcconnellgroup.