On Nov. 8, the California Privacy Protection Agency Board voted to adopt new rules for data brokers, or businesses that collect and sell personal consumer data to third parties, and advance a raft of proposed rules for privacy risk assessments, cybersecurity audits and the regulation of artificial intelligence technologies such as automated decision-making technology.

ADMT, according to the CPPA, refers to “any system, software, or process—including one derived from machine-learning, statistics, or other data-processing or artificial intelligence—that processes personal information and uses computation as whole or part of a system to make or execute a decision or facilitate human decision-making.” Businesses can use ADMT to engage in "profiling," or evaluating and predicting consumer traits such as work performance, health or economic situation based on personal data.

The CPPA, which first drafted these regulations a year ago, was established in November 2020 to enforce the California Consumer Privacy Act and the Delete Act, which grants California residents the right to request that data brokers delete their personal information. The new data broker regulations seek to clarify provisions in the Delete Act, which requires data brokers to register with the CPPA, while the proposed rulemaking package aims to mandate cybersecurity audits and privacy risk assessments for certain businesses and to grant consumers the right to access and opt out of businesses' use of ADMT.

The data broker regulations, pending approval by the Office of Administrative Law, will go into effect on Jan. 1, 2025, and the proposed rulemaking package on AI, cybersecurity and privacy will move to a 45-day formal public comment period following the CPPA's vote.

Brandon Reilly, the leader of the privacy and data security practice at Manatt, Phelps & Phillips, spoke with The Recorder this week to discuss his takeaways from the board's decision and the potential impact of the proposed regulations on future litigation in the AI, privacy and cybersecurity spaces.

The following has been edited for length and clarity.

How do these proposed rules depart from or build on previous iterations of AI and cybersecurity regulations, for example, the sweeping AI regulation bill in California, Senate Bill 1047, which Gov. Gavin Newsom vetoed in September?

That bill was focused really directly on developers. So you can think of a lot of actual and proposed AI regulations as focused on one of two audiences. … The first would be a developer, so a company that develops an AI model or a platform. And the second is generally called deployers, so any business or organization that looks to deploy an AI system or to implement it in their existing products or services. So the bill that the governor vetoed was intended to directly regulate developers. And, for that reason, I think there was a lot more focus on the impact on innovation, because so many of those developers are U.S. firms and many of them are California-based firms.

The CCPA's general applicability is to any business, any for-profit business, that collects personal information from California residents. So that really is the universe of entities, which is quite large, that can be regulated by these [rules]. … So it is, in some ways, more targeted because it's within the existing universe of CCPA regulation, but in many ways it's actually more expansive than the bill that was just vetoed, because that was really [focused] on firms that were developing AI models.

How do you foresee these regulations potentially affecting litigation in this space?

The litigation is going to follow the trajectory of litigation under the CCPA in general, which is actually somewhat limited, and that's because there is no private right of action under the CCPA. Well, I should say there's a limited private right of action that's focused only on data breaches. Everything else in the CCPA, which is just an incredibly extensive and comprehensive list of data privacy-related requirements—everything from privacy notices to data subject rights, like the right to delete, to the regulation of advertising and data sales—all of those obligations, as they exist now, do not have a private right of action. So a violation of those requirements is solely enforceable by the CPPA and the California Office of the Attorney General.

The data breach provision, which does authorize litigation directly … has been tested recently. Originally, it was only used in the context of the traditional data breach. And we are seeing plaintiffs start to more creatively attempt to use that private right of action so that it applies to what we would consider more of a privacy violation—things like allegedly unauthorized website tracking. … So the result is that these regulations, as well, would not have a private right of action. If you violate them, [they're] not going to be subject to a direct right of action. The way that it can become relevant in litigation is you do see litigants assert a claim under California Business and Professions Code Section 17200 [the Unfair Competition Law], because that authorizes claims for underlying violations of the law, even if those laws don't have a private right of action. … That may well happen with these regulations.

I would say another piece of this, and I think this is one of the most impactful aspects of these proposed regulations, is that they're asking regulated businesses to file abridged reports of the impact assessments that they conduct … and then, on request, businesses must provide the full, unabridged impact assessment. And so what that means is that, of course, it's going to significantly increase the regulator's visibility into each regulated business's data practices. It's also possible that these reports could be obtained in civil discovery or subject to a subpoena. And that's certainly what the most creative plaintiffs … would do.

How are you advising clients to proceed in the wake of these regulations being approved?

We do have the benefit of time, because there is still this formal rulemaking process. … In response to the public comment period, there may be further changes to those regulations. And the dialogue among the board members on Friday suggests that there will be some revisions. … So that needs to take place. There are financial impact reviews that need to happen.

California has a robust regulatory procedure law in place that this has to go through. So there is some time. But it's never too early for companies to start thinking about this. In fact, a lot of companies are already facing similar [international requirements, for example, the privacy impact assessments required by the European Union's General Data Protection Regulation].

So for those international businesses … some of these requirements are going to be familiar to them. They're not the same thing, but we are starting to work with regulated businesses on thinking about how this might change the existing governance processes that they have in place to meet those international regulations. I'd also note that there are … state regulations, as well, in other states beyond California that require these privacy impact assessments. The difference is that in those statutes, the impact assessments are not [filed with the regulator]. So it's really a bare-bones obligation. …

So what we're working with companies on, and will be unveiling in the next several months, are products that will help them with their governance solutions. … Many companies that have to roll this out at scale benefit from having a tried-and-true, set [procedure]. And that's what they're starting to build.

What other takeaways do you have from the board's decision?

I think … the public reporting aspect of this is quite new, and it's the kind of requirement that is going to really catch the attention of risk officers, compliance officers and legal officers, simply because it requires a certification of compliance. It requires the disclosure of a certain level of information. …

I think, too, the other headline here is that privacy laws continue to be the front line of regulating AI. And that's not to say that privacy law is the only way to regulate AI technologies; that's certainly not true. But existing state privacy laws have really facilitated and enabled states to regulate AI systems somewhat quickly. And I think that's the case for a few reasons. I think one is that these laws are all relatively new anyway, and so [they have] the attention of lawmakers and regulators.

But two, a lot of the risk controls that exist for privacy in general—things like those … impact assessments—are easily translatable to attempts to regulate AI technology. And so I think what we're seeing is that regulatory bodies and statutory frameworks set up to regulate data generally have proven to be an attractive model for AI regulation.