The privacy policies of a number of large, global technology companies are not fully compliant with the European Union's General Data Protection Regulation (GDPR), according to a recent report released by a European consumer protection group.

The report, published by the European Consumer Organization (BEUC), used artificial intelligence to analyze 14 privacy policies from major tech companies, including Google, Facebook, Amazon and Apple.

BEUC used a tool called Claudette, which evaluated the privacy policies and, with guidance from the researchers, identified language that was problematic or confusing.

As part of the Claudette project, BEUC developed a web crawler that monitors privacy policies; the collected policies are then processed using supervised machine-learning technology. The tool flags sentences and labels them under three categories: insufficient information, unclear language and problematic processing.
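The article does not describe how Claudette's classifier is built, but the pipeline it outlines, sentences labeled into a small set of categories by a supervised model, resembles a standard text-classification setup. The sketch below is purely illustrative: the example sentences, labels and model choice are assumptions, not details from the BEUC report.

```python
# Minimal sketch of a supervised sentence classifier in the spirit of the
# pipeline described above. The real Claudette system, its features and its
# training corpus are not described in this article; the sentences and
# labels here are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical labeled sentences written to resemble privacy-policy language.
train_sentences = [
    "We may share your data with partners for various purposes.",
    "Your information can be used to improve our services and more.",
    "We process your data as required to comply with applicable law.",
    "Personal data may be transferred to third parties without further notice.",
]
train_labels = [
    "unclear language",
    "unclear language",
    "insufficient information",
    "problematic processing",
]

# TF-IDF features feeding a linear SVM: a common baseline for labeling
# sentences into a small, fixed set of categories.
classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
classifier.fit(train_sentences, train_labels)

# Classify a new sentence pulled from a crawled policy.
new_sentences = ["We may retain your data for as long as necessary."]
print(classifier.predict(new_sentences))
```

In practice, a system of this kind would be trained on a much larger corpus of expert-annotated policy sentences; the point of the sketch is only to show how flagged sentences map onto the three categories named above.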

The report noted that privacy policies “are the main point of reference for civil society and individual consumers when it comes to controlling how personal data is being processed by the data controllers.”

“None of the 14 analyzed privacy policies gets close to meeting the standards put forward by the GDPR,” the report continued. “Unsatisfactory treatment of the information requirements; large amounts of sentences employing vague language; and an alarming number of 'problematic' clauses cannot be deemed satisfactory.”

In the report, a problematic clause is defined as one that is potentially unlawful.

Across all of the privacy policies studied, 11 percent of sentences contained confusing terminology, according to the report.

For example, the report indicates that Facebook's privacy policy shows an awareness of the GDPR “but gives rather the impression of the company using … legal terms and buzzwords and catch-phrases, [instead of attempting to construct] a truly user-centric, GDPR compliant policy.”

“Hopefully, they would start taking a more user-centric approach towards these documents, instead of treating them simply as a box to be checked,” the report said. “Moreover, if this study is treated as an inspiration to others, civil society might be soon equipped with artificial intelligence tools for the automated analysis of privacy policies. When this is the case, they will leave no stone untouched, no policy unread, no infringement unnoticed.”