The European Commission has left the door open to using AI for facial recognition, defying expectations that the bloc would impose a total ban on the controversial technology.

Margrethe Vestager, the EU's digital policy chief, unveiled a strategy for artificial intelligence on Wednesday that aims to boost the use of AI in Europe while addressing concerns about data privacy.

"We want every citizen, every employee, every business to stand a fair chance to reap the benefits of digitalization," she said in a statement.

Vestager was expected to propose a ban on the use of AI for facial recognition in response to increasing public concern about how the technology threatens personal privacy.

Instead, the Commission called for a debate on whether exemptions from the EU's strict data privacy rules could be made to allow the use of AI for facial recognition.

"While today, the use of facial recognition for remote biometric identification is generally prohibited and can only be used in exceptional, duly justified and proportionate cases, subject to safeguards and based of EU or national law, the Commission wants to launch a broad debate about which circumstances, if any, might justify such exceptions," the report said.

A draft of the strategy from January called for a ban of up to five years on the use of AI for facial recognition.

The earlier draft said the "use of facial recognition technology by private or public actors in public spaces would be prohibited for a definite period (e.g. three to five years) during which a sound methodology for assessing the impacts of this technology and possible risk management measures could be identified and developed."

The strategy unveiled on Wednesday takes a two-tier approach to AI, calling for strict rules for high-risk uses and a voluntary labeling scheme for low-risk applications.

"As AI systems can be complex and bear significant risks in certain contexts, building trust is essential," the EU report says.

"Clear rules need to address high-risk AI systems without putting too much burden on less risky ones. Strict EU rules for consumer protection, to address unfair commercial practices and to protect personal data and privacy, continue to apply," it says.

It also says that for high-risk cases, such as in health, policing, or transport, "AI systems should be transparent, traceable and guarantee human oversight."

For lower-risk AI applications, the Commission envisages a voluntary labeling scheme for providers that choose to apply higher standards.

The Commission on Wednesday also unveiled its digital strategy for the next five years, which aims to boost the use of data by EU companies so they can compete more effectively with U.S. and Asian rivals that tend to be more advanced in the use of data.