No fewer than 47 tech companies, including Apple, Microsoft, Google and WhatsApp, signed their names to an open letter last week urging the United Kingdom's Government Communications Headquarters (GCHQ) not to move forward with a proposal that would allow the covert addition of law enforcement participants to encrypted group chats or calls.

The concerns outlined in the letter will be familiar to anyone who has followed the ongoing effort to find some kind of middle ground between privacy and the needs of law enforcement. Some believe, however, that GCHQ's proposal has the potential to create new threats while raising the same human rights concerns.

“I would say that [the GCHQ proposal] actually escalates all of the tensions that are present now. It really opens it up to a much larger security risk,” said Jason Rebholz, a co-founder of the specialized technical advisory firm MOXFIVE.

The source of much of that original tension is the encryption protecting communications channels and just how impenetrable it should be. Tech companies balked last fall after Five Eyes, the intelligence alliance comprising Canada, New Zealand, Australia, the United Kingdom and the United States, issued a communiqué advocating the installation of law enforcement-accessible backdoors in encryption.

The companies argued that a backdoor of any kind can still be opened by the wrong people. The GCHQ proposal attempts to sidestep the issue by making the case that tech companies wouldn't have to touch their encryption in order to silently add law enforcement to a group chat or call, a capability critics have dubbed a “ghost” participant. According to Rebholz, such a measure would bypass the encryption altogether rather than weaken it. But it could also inadvertently create new vulnerabilities.
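To see why critics describe this as bypassing rather than breaking encryption, consider a minimal sketch of a simplified, sender-keys-style group chat. It is illustrative only, with invented names, and reflects no vendor's actual implementation: the per-message key is encrypted to every member's device, so a silently added ghost simply becomes one more recipient while the roster users see never changes.

```python
# Illustrative sketch only: a simplified sender-keys-style group chat.
# All names are invented; this is not any messaging platform's real code.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Member:
    name: str
    visible: bool = True  # a "ghost" participant would be flagged False

def encrypt_for(member: Member, key: bytes) -> str:
    # Stand-in for a real public-key encryption step
    return f"enc[{key.hex()} -> {member.name}]"

@dataclass
class GroupChat:
    members: List[Member] = field(default_factory=list)

    def roster(self) -> List[str]:
        # What ordinary users see in the chat's member list
        return [m.name for m in self.members if m.visible]

    def fan_out(self, message_key: bytes) -> Dict[str, str]:
        # In end-to-end encrypted group chats, the per-message key is
        # encrypted separately to each member's device. The cipher is
        # never weakened; a ghost simply becomes one more recipient.
        return {m.name: encrypt_for(m, message_key) for m in self.members}

chat = GroupChat([Member("alice"), Member("bob")])
chat.members.append(Member("intercept-service", visible=False))  # added silently
print(chat.roster())              # ['alice', 'bob'] -- no visible change
print(chat.fan_out(b"\x01\x02"))  # ...but the key now reaches three parties
```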

“When you're introducing this type of functionality, it's a very heavy undertaking from the coding perspective. It's going to open it up for a greater risk of just coding errors that others can take advantage of and essentially hack into,” Rebholz said.
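The kind of flaw Rebholz describes doesn't have to touch the cryptography at all. Here is a hypothetical example, invented purely for illustration: if the silent-add path authorizes requests by a caller-supplied name instead of an authenticated identity, anyone can simply claim to be the intercept service.

```python
# Hypothetical bug, invented for illustration: the silent-add path
# authorizes by a caller-supplied display name rather than a
# cryptographically verified identity.
AUTHORIZED = {"intercept-service"}

def add_silent_member(members, requester_name, new_member):
    # Flaw: requester_name is whatever the caller claims it is.
    # Nothing here proves who is actually asking.
    if requester_name in AUTHORIZED:
        members.append((new_member, "hidden"))
    return members

chat = [("alice", "visible"), ("bob", "visible")]
# An attacker simply asserts the privileged identity...
add_silent_member(chat, "intercept-service", "eve")
print(chat)  # ...and 'eve' is now an invisible participant
```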

The scariest part, he thinks, is that such flaws could in turn subvert the function itself, allowing bad actors to slip into conversations or calls unannounced and unnoticed. There's also the possibility that the reward doesn't justify the risk.

Jarno Vanto, a partner in the privacy and cybersecurity group at Crowell & Moring, said the GCHQ proposal still affords the same potential for governmental misuse that has dogged more traditional backdoors.

“Half these tools—and this always happens—they're leaked out there and then they will be used by governments who are not Western human rights member governments,” he said.

Meanwhile, the original intent behind the “ghost” function may quickly become moot. After all, the ability to silently monitor a platform isn't very useful if you're the only one there. Users, especially potential bad actors, could be inclined to abandon a messaging platform if they suspect that the integrity of their communications has been compromised.

Dan Greene, a certified information privacy professional with Beckage, doesn't think user awareness will begin and end with the content of their messages for much longer. He foresees rising concern about the metadata that can be mined from those communications.

“What can you determine about who I am, where I am, how frequently I'm communicating with somebody, without knowing the content of my information? And I think that that will sort of be the next wave,” Greene said.
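A minimal sketch, using invented records, shows how much contact and timing information alone can reveal without reading a single message:

```python
# Illustrative only: inferring patterns from message metadata
# (sender, recipient, timestamp) with no access to content.
# These records are invented for demonstration.
from collections import Counter
from datetime import datetime

records = [
    ("alice", "bob",   "2019-06-03T09:01"),
    ("alice", "bob",   "2019-06-03T09:04"),
    ("alice", "carol", "2019-06-03T23:40"),
    ("alice", "bob",   "2019-06-04T09:02"),
]

# Contact graph: who alice messages, and how often
contacts = Counter(recipient for _, recipient, _ in records)
print(contacts.most_common())  # [('bob', 3), ('carol', 1)]

# Timing pattern: a recurring morning exchange with bob
mornings = [ts for _, rcpt, ts in records
            if rcpt == "bob" and datetime.fromisoformat(ts).hour < 12]
print(f"{len(mornings)} morning messages to bob")  # a routine, no content needed
```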