Valuation Issues in Acquiring Artificial Intelligence Companies
October 30, 2018 at 12:55 PM
Although we are still at the relatively early stages of the commercialization of artificial intelligence (AI), it is clear that privacy and security considerations will be at the forefront of measures to regulate AI as industries increasingly adopt and integrate AI tools and store and utilize massive amounts of data generated through such tools. As acquirers of AI businesses struggle with how to properly value AI assets, potential liabilities associated with AI, including increased regulation, are making the valuation process even more challenging.
AI is a substantial and rapidly growing driver of M&A activity both in the United States and abroad, and new AI companies are being created and funded at a record pace. In 2017, investors poured $15.2 billion into AI startups, a 141 percent increase over 2016, according to CB Insights. That pace has continued into 2018, with Venture Scanner reporting that Q2 2018 saw a record $4.4 billion invested in AI companies, a 19 percent increase from the same period in 2017. During the first quarter of 2018, 20 percent of earnings calls of U.S. publicly listed companies discussed AI, according to a Bain & Co. study. Management and corporate development teams at companies across a broad range of industry sectors are now encouraged to consider adopting AI solutions. This heightened interest is driven by the prospect of significant gains in efficiency and cost reduction, as well as by concerns that competitors are investing in tools that could upend the status quo.
This competitive tension is evident in a swathe of recent acqui-hire deals that value talent at between $5 million and $10 million per AI expert, according to a PitchBook study. But as the number and variety of AI use cases grow, the methods for valuing target companies that fit within the broad AI umbrella have also expanded beyond the traditional talent metrics. Amazon, Google and Microsoft have begun offering enterprise AI solutions that act as an alternative to M&A for established companies looking to build out an AI capability. Accordingly, the talent-based valuation metrics, which are generic and tend to be established by serial acquirers in deals for cutting-edge technology, are being weighed against the cost of building out an AI capability in-house using third-party AI tools such as Amazon AI, Google's Cloud AutoML or IBM's Virtual Assistant. As the capabilities of AI processes become better understood within industries, potential acquirers in M&A transactions are also increasingly able to produce valuations based on the efficiencies that they expect the underlying technology will bring to their business. In addition, some AI startups are able to demonstrate customer and user results, as well as cross-selling opportunities, through use of their AI tools, which can be another source of valuation data. As the AI M&A market matures, acquirers are establishing valuations using a combination of these factors rather than the simple talent metrics that defined the early market.
Despite these strong drivers for building AI capabilities, acquirers would be wise to exercise caution when approaching AI M&A prospects. Even as advances in AI solutions open exciting new value propositions for many companies, regulators are increasingly being pressed by their constituents to enact stricter rules on the collection and use of personal data.
The EU's General Data Protection Regulation (GDPR) was adopted in 2016 and came into effect in all EU member states in May 2018. Companies that transfer, process or maintain the data of EU residents must adhere to the new standards of the GDPR. The GDPR represents the current high-water mark for regulation of data security and stands at the opposite end of the spectrum of regulatory approaches from that adopted in China. China's approach is designed to foster accelerated development of AI and reflects a lower concern for protection of personally identifiable information. In the United States, the regulatory approach is evolving, but there is evident tension between the opposing considerations of international competition for AI talent (in what is widely referred to as the AI arms race) and constituents' demands for protection of personal information. With the adoption of the California Consumer Privacy Act (CCPA) on June 28, California became the first state to adopt comprehensive regulations establishing consumers' rights to control their personal information, with personal data rights that track a number of the guiding principles contained in the GDPR. Other recent efforts to regulate AI include California SB 1001, signed into law on Sept. 28. SB 1001 requires that a person who uses a bot in online communication to incentivize a purchase or sale of goods or services, or to influence an election, disclose in that communication that a bot is being used.
In addition to regulatory compliance considerations, adopters of AI need to be cognizant of the reputational exposure that the use and manipulation of big data through AI tools brings. Thanks to the wide publicity generated by data breaches, public awareness of how companies safeguard and use information collected from customers and users has never been more intense. Whether that reputational exposure exists at the general public level or at a narrower customer or industry level, care should be taken in formulating data protection strategies (and in evaluating those of potential targets) to understand vulnerabilities beyond mere regulatory compliance.
As the upward trend in AI-related deal activity continues, corporate leaders across a vast array of sectors are becoming more educated in data protection matters. Deal professionals should prioritize privacy and data protection due diligence with any target having AI tools or extensive data sets, especially in light of the rapidly evolving regulatory landscape. Acquirers should also consider potential risks and liabilities associated with the integration of data sets and AI tools of potential targets with the acquirer's existing businesses. Despite advances in developing methodologies for valuing AI companies, values tend to be inconsistent and often come down to how desperate an acquirer is to obtain the technology or prevent that technology from falling into the hands of a competitor.
Craig W. Adas is managing partner of Weil, Gotshal & Manges' Silicon Valley office and a member of the corporate department. His practice focuses on mergers and acquisitions, private equity and securities, with particular emphasis on private and public acquisitions, leveraged buyouts, dispositions and joint ventures.
Alex Purtill is an associate in the firm's Silicon Valley office. He participates in the representation of financial and strategic clients in various acquisition transactions, including public and private mergers and acquisitions, divestitures and cross-border matters.