
Academics have been debating for a while whether machines can be inventors for the purposes of patent law. Earlier this month, University of Surrey IP professor Ryan Abbott and others upped the ante, forming the Artificial Inventor Project and filing patent applications around the world that list an AI machine as the inventor.

The USPTO, which convened a conference earlier this year on AI and IP, is now formally requesting comments from the public on patenting artificial intelligence inventions. PTO Deputy Director Laura Peter publicized the request in a blog post Monday, highlighting four sample questions the agency intends to address. We've asked a couple of experts for some quick takes: Fish & Richardson senior principal John Dragseth and Kathi Vidal, Winston & Strawn's managing partner for Silicon Valley.

For further reading, check out Abbott's articles on AI inventorship and on the level of ordinary skill in an AI world.

PTO QUESTION 1: Do current patent laws and regulations regarding inventorship need to be revised to take into account inventions where an entity or entities other than a natural person contributed to the conception of an AI invention or any other invention?

John Dragseth | Fish & Richardson | Senior Principal: I think the current law is flexible enough to handle "machine-heavy" inventions. The law simply asks which humans made a meaningful contribution to the idea of the invention—that's plenty flexible, as long as we assume there has to be a human inventor somewhere. We are going to end up with situations where the "invention" occurs long after the inventor completes her work—maybe even after she dies. Out of context, an already-dead inventor sounds silly. But if you understand that she's getting credit for the computer system she set up—and as long as we make sure there's proximate cause between her contribution and the invention—all should be good.

The example I always use is an AI system for mixing and selecting plant or drug candidates in massive numbers, where someone or someones smart build the system (probably a biotech person and a computer science person), and then lab techs watch the system to select the candidates. If the selections are "dumb," e.g., select any plant that is 3 inches tall or taller after a week, then the first people are the co-inventors—even if they are long gone. If the selections are "smart" and require judgment, then someone needs to figure out if the lab tech should also be an inventor. It's a very flexible legal framework, and rightly so, because there are a million tricky issues on inventorship that go beyond AI.

Also, the current law is not punitive—i.e., if you mess up inventorship, it can generally be fixed as long as you were acting in good faith. So tricky inventorship issues don't generally get me too worked up in my day-to-day practice.

The one exception is when you have ownership problems. Fixable inventorship problems become unfixable ownership problems if all the potential inventors don't have an obligation to assign. Everything goes from sunny to rainy immediately when that happens. To that end, it's especially important for companies that develop AI systems to have good employment or consulting agreements, because inventions can occur so far down the road; otherwise they can be forced to jump through all sorts of hoops 20 years later when the "computer" starts inventing things.

Side note: The PTO's question refers to what happens when "an entity or entities other than a natural person contributed to the conception of an AI invention." That can't happen, at least not for a while. Machines cannot have ideas, so they don't contribute to an invention. Even if you want to stretch the concept of what an "idea" is, we already have doctrines holding that one person's work can "inure to the benefit" of another person—so it wouldn't be too hard to make the work of a machine inure to the benefit of the person whose ideas put the machine in motion.

Kathi Vidal | Winston & Strawn | Managing Partner for Silicon Valley: The PTO's questions hit some of the hot issues percolating around AI and our patent system. Though any proposals or changes to the law will have to withstand, at some point, constitutional-level scrutiny, including what our Founding Fathers meant by empowering Congress to secure for limited times exclusive rights to "inventors," the bigger issue in my view is whether expanding the definition of "inventor" to include AI would advance the constitutional objective of "promot[ing] the progress of … [the] useful arts."

Or, on the other hand, would allowing for the patenting of inventions conceived by AI (which AI could do with relative speed and ease and little expense) threaten to preempt fields and lock up technology in such a way that stunts progress? There is already enough of a divide between the incentives needed to encourage innovation in the bio and pharma space (just read the Federal Circuit's recent Athena v. Mayo decision) and those needed in the electrical, computer and mechanical arts. Adding AI to the mix may stretch our "one-size-fits-all" patent system to its breaking point.

PTO QUESTION 2: Are there any patent eligibility considerations unique to AI inventions?

Dragseth: Generally, under Section 101, you are looking for particular physical inputs or particular physical outputs for the mental or computing work performed by a person (e.g., in Mayo) or a computer (e.g., in Alice). AI inventions are often carried out entirely inside the computer, so that can create some issues. But it's not impossible, as the Federal Circuit has found "inside the computer" inventions eligible when they do cool enough stuff.

Vidal: The judicially created exceptions to patentability are premised in some respects on a fear of locking up technology or preempting a field. We don't allow for the patenting of mathematical formulas because doing so would preempt every use of the formula. The issue with AI inventions is not that any given invention would preempt a field, but that the speed with which AI could invent raises similar concerns. For this reason, patent eligibility, like other aspects of our patent system, needs to be carefully rethought.

PTO QUESTION 3: Does AI impact the level of a person of ordinary skill in the art?

Dragseth: I don't think AI has a direct effect on the level of skill in the art because, as I noted above, I think the inventors are the people who set the computer system in motion and perhaps the people who interpret the computer system's actions (if that interpretation is meaningful in character).

But a lot of AI inventions involve applying computer smarts to some other problem, like drug selection, or signal processing, or other discrete technologies. So you are more likely to have an inventive "team" whose skill sets don't overlap much. That can make it really hard to provide a simple definition of the skilled artisan like we are accustomed to—e.g., "a person with an electrical engineering degree and four years' experience in the field designing X." But we already have cases with invention teams (shoot, there was a Canon inkjet case once with a couple dozen inventors, and the Federal Circuit was cool with that). AI just means we'll have more.

Vidal: If the level of ordinary skill in the art is revised to the level of AI, then the exercise of determining obviousness could become circular. If, operating on a data set, AI combines two references, then one could say that, under an AI standard, those concepts were obvious to combine. If AI invents based on a given data set, then one could argue that it would have been obvious from the perspective of AI to come up with that invention. Because AI can readily analyze the results of each combination and adjust its next combination, the whole hindsight analysis collapses. In a way, AI has hindsight from the get-go. (My partner Chuck Klein adds: "Using AI might also affect reasonable expectation of success, and redefine the 'finite' number of potential solutions in the obvious-to-try context.")

PTO QUESTION 4: Do the disclosure rules (enablement, specification, etc.) need to be altered for AI-related patent applications?

Dragseth: Perhaps I haven't thought enough about this, but I think the disclosure rules are plenty flexible as they already stand, too. First, they are directed to what the person reading the patent can achieve—not to who or what made the invention, or how it was made. Now, I've been presented with some crazy-smart inventors in the AI space, so that certainly makes it harder to write a good, fulsome patent application. But that's true in all complex technologies.

I would like to see the PTO open up its "disclosure" rules to permit the filing of videos and such, which can be critically useful in understanding complex subjects. I assume that, if a picture is worth a thousand words, then a 1,000-frame video is worth a million. That's less a function of AI, though, than of other types of "interactive" inventions we see and of the ability to have an easily accessible host for video that would meet disclosure requirements (e.g., putting a YouTube link in a specification, as long as YouTube keeps the video there for 20 years).

Vidal: Every rule needs to be rethought when it comes to AI and all the data the AI analyzed to come up with the invention: not only enablement and the specification, but also which pieces of prior art must be disclosed to the PTO, and whether the AI must not only disclose that art (which could be impossibly voluminous) but also rate it based on how it weighed the references during its design process. If we are going to reward AI inventions, we need to make sure the public receives the appropriate quid pro quo.