EU Governments Adopt Negotiating Position on AI
Artificial intelligence systems should be safe and respect human rights, EU telecom ministers said Tuesday, agreeing on a negotiating stance on the proposed EU Artificial Intelligence Act.

Among other things, the Council narrowed the European Commission's original definition of an AI system to systems developed through machine learning approaches and logic- and knowledge-based approaches, to distinguish them from simpler software systems. It extended the prohibition on using AI for social scoring to private actors and broadened the ban on using AI to exploit vulnerable people to cover those who are vulnerable due to their social or economic situation. The Council also clarified when law enforcement agencies should, in exceptional cases, be allowed to use real-time remote biometric identification systems in public spaces, and added safeguards to ensure high-risk AI systems aren't likely to cause serious breaches of fundamental rights. A new provision addressed situations where AI can be used for many different purposes (general-purpose AI) and where such technology is then integrated into another high-risk system.

The Council's version explicitly excluded national security, defense and military purposes from the law's scope, as well as AI used solely for research and development. It also set more proportionate caps on fines for small and mid-sized businesses and start-ups. The legislation needs approval from both the Council and the European Parliament, whose negotiating stance hasn't been finalized.

The negotiating document drew criticism from a consumer group and a member of the European Parliament. The European Consumer Organisation said ministers "reached a disappointing position for consumers" by leaving too many key issues unaddressed, such as facial recognition by private companies in publicly accessible places, and by watering down provisions on which systems would be classified as high risk. It urged EU lawmakers to stand up for consumers.
One legislator, Patrick Breyer of Germany, a member of the Greens/European Free Alliance, agreed. The Council's approach is "extremely weak" on the use of AI for mass surveillance purposes, he said by email: "With error rates (false positives) of up to 99%, ineffective facial surveillance technology [bears] no resemblance to the targeted search that governments are trying to present to us."