Today, virtual reality is a game. There are a few apps that offer nifty experiences of being on a beach, at the top of the Empire State Building, skydiving and the like. In the next decade, we will see significant expansion of the number, type and capabilities of the virtual environments we can spend time in. Artificial intelligence will make those environments feel even more real; it will create entirely novel people (in the form of avatars) and places, or it will recreate what is familiar to us and enable us to walk down the streets of our childhood, “live” in houses that replicate the ones we grew up in—in short, have experiences in virtual worlds. These worlds will have virtual relationships, communities and economies, and virtual places for work and leisure.

Most of us will create avatars—our “other” selves—that will become as ubiquitous as our personal email accounts. Our virtual experiences will be mediated by our avatar selves: on our behalf, they will walk the virtual streets and live in whatever virtual environment we choose.

As artificial intelligence increasingly displaces human workers, more time will free up for us to accompany our avatars on their virtual excursions. We are already in the midst of a slow descent out of the brick-and-mortar world and into a digital sphere—it is happening incrementally in our attachment to our cellphones. Many people already live large segments of their lives in digital environments, attached to social media, heads down, looking at whatever occupies them and allows for escape.

Virtual worlds will not be crime free. Why would they be? The avatars we create will mirror our population, and while the virtual worlds may have rules (read “laws”) imposed by their technologist and gaming company creators, people have always found ways to intentionally or unintentionally break rules. Already in gaming we see crimes that parallel those in our brick-and-mortar world: fraud schemes in which some “players” steal merchandise acquired by others; players who harass and stalk; players who are creepy and try really weird things. But in the virtual world, there will be new forms of these crimes, and legitimate questions about the nature and quantum of harm suffered.

Many argue that the nature of the harm—while real—bears no comparison to the harms suffered from crime committed in our brick-and-mortar world. This is certainly true for causing the “death” of an avatar in a game environment, versus tangible physical harm in the real world. But crimes of theft bear a closer resemblance: stealing an avatar's merchandise (such as tokens that in some environments may be used to acquire valuable items—for instance, tools, clothing or accessories, housing or vehicles, weapons, or currency) has real practical consequences for the avatar's human, who may have spent real-world currency to acquire the merchandise, or time and energy doing the tasks necessary to earn it. Theft in the virtual world causes many (but not all) of the same harms as in the brick-and-mortar world.

Similarly, fraud can be committed in both the virtual and brick-and-mortar worlds, resulting in costs borne by the victims and the communities in which the frauds occur.

If we know that crimes will occur in emerging virtual worlds, and we should not expect that gaming companies and creators will have the desire or ability to address them (or even to recognize them as such), it is reasonable to start considering whether there will need to be a form of criminal justice in virtual worlds. Today, given how nascent AI-enhanced virtual reality is, this question is purely theoretical. But more complex virtual environments will evolve quickly, and it is worth asking some of the difficult questions now. It is already clear that humans become attached to their avatar selves; they feel pleasure, hope, frustration and sadness at their avatars' trials and tribulations. Crimes against avatars have real-world impact on their humans.

There will be rules imposed by the creators and distributors of games and environments. Rules dealing with theft of merchandise or inappropriate conduct exist today and will persist tomorrow. But even today, enforcement can be complicated, spotty or non-existent.

In a far more active set of virtual worlds, with millions of people “playing” or experiencing an environment simultaneously, can a company really be expected to act as a private enforcer? At some point, we will have to make choices about the rules we choose to live by in these worlds: Will we choose a Hobbesian world in which the strong survive, where there are no imposed rules and each participant fends for himself or herself? Will we choose a modest set of rules with essentially binary consequences: good behavior allows you to remain in the world; misbehavior results in banishment? Will we have a world ruled by ad hoc vigilantism, or will we find that we all spend so much time in these worlds that we want something more formalized? If we do want rules, who or what is the body—akin to a legislature—that makes them? Do avatars, or their humans, have any say at all in who or what will govern these environments?

It is hard to imagine that we would be able—or want—to graft onto virtual environments the rules we are used to in the brick-and-mortar world. Why not use large-scale entry into virtual environments as an opportunity to remake criminal justice and civil law generally?

Investigation and enforcement also present unique issues in virtual worlds. Prisons don't exist there—and why would we recreate such institutions if we have the opportunity to start from scratch? In all events, how could they ever be more than the equivalent of “banishing” an avatar from an environment? It would, of course, be possible to create real-world penalties for virtual-world crimes—and for fraud, and even possibly certain forms of theft, this is not so hard to imagine. But what about crimes that are unique to virtual environments and have no real-world analogue: for instance, harassment and anti-social behavior within a virtual environment?

This all seems far-fetched, I know. But when you think about a future with more and more virtual reality powered by artificial intelligence, it is worth considering that people won't suddenly behave better or differently there than they do here. The question is: will we want to do anything about it, and can we?

It is my belief that, over time, several things will occur that will result in the creation and enforcement of a type of criminal justice in virtual worlds. I believe the rules are likely to develop first through a partnership between the companies creating the environments and those who populate and use them. These pioneers will set basic rules that carry penalties such as banishment for a set of enumerated offenses (repeated theft, repeated fraud, behavior extraordinary for a given environment, which may include harassment, and the like). Penalties for theft may include an inability to access certain areas, or one's own items, until repayment has occurred. And forms of community service may be easier to imagine and enforce in relatively tangible ways (e.g., performing tasks of value for others as recompense for rule-breaking).

At the beginning of a new technological era, such as the one that AI-powered virtual reality presents, there are opportunities. Here, the opportunity to understand what rules we will want (or not want) to live by is real. Powerful virtual environments in which large numbers of people spend time are clearly several years away, but a robust dialogue between now and then will help us gather our thoughts and our philosophies, and craft our answers to these new challenges.

Katherine B. Forrest is a partner in Cravath, Swaine & Moore's litigation department. She most recently served as a U.S. District Judge for the Southern District of New York, and previously served as a Deputy Assistant Attorney General in the Antitrust Division of the U.S. Department of Justice.