    MEPs negotiate political deal with Council on AI Act for EU

    The legislation is intended to ensure AI implementations are safe, boost business, preserve democracy and the rights of individuals

    On Friday, Members of the European Parliament (MEPs) struck a provisional political deal with the Council of the European Union on a bill designed to ensure that AI in Europe is safe. By which they mean that it boosts business, preserves democracy and the rights of individuals, and doesn't cause harm. This might prove a tough circle to square, but it has been received positively in some quarters at least.

    Ilona Simpson, CIO EMEA at Netskope, commented, “Most important for me among the announcements…was the efforts that legislators have gone to create balance between regulation and innovation.

    “The Act is the first legislation I have seen that actively encourages innovation by start-ups and SMEs [small- and medium-sized enterprises] – in fact it has explicit provision for it by promoting the use of regulatory sandboxes established by national authorities, as well as real time testing. This will serve to ensure the EU is able to nurture technical advancement without kneecapping it with excessive regulatory limitations.”

    Netskope is a global company that specialises in cybersecurity.

    Enshrines obligations

    The proposed Artificial Intelligence Act will enshrine obligations for AI, based on its potential risks and level of impact. In recognition of AI’s potential dangers, the co-legislators agreed to prohibit:

    • biometric categorisation systems that use sensitive characteristics (such as political, religious, philosophical beliefs, sexual orientation, race);
    • untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
    • emotion recognition in the workplace and educational institutions;
    • social scoring based on social behaviour or personal characteristics;
    • AI systems that manipulate human behaviour to circumvent their free will; and
    • AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation).

    Law enforcement exemptions

    Negotiators agreed safeguards and narrow exceptions for the use of remote biometric identification (RBI) systems in publicly accessible spaces for law enforcement purposes, subject to prior judicial authorisation and for strictly defined lists of crimes. ‘Post-remote’ RBI would be used strictly in the targeted search of a person convicted or suspected of having committed a serious crime.

    ‘Real-time’ RBI would comply with strict conditions and its use would be limited in time and location. For example, it could be used for targeted searches of victims of abduction, trafficking and sexual exploitation, and the prevention of a specific terrorist threat.

    It could also be used to locate or identify someone suspected of having committed one of the specific crimes mentioned in the regulation: that is, terrorism, trafficking, sexual exploitation, murder, kidnapping, rape, armed robbery, participation in a criminal organisation and environmental crime.

    High-risk systems

    For AI systems classified as high-risk (due to their significant potential harm to health, safety, fundamental rights, the environment, democracy and the rule of law), MEPs included certain mandatory measures. These include a fundamental rights impact assessment, which is also applicable to the insurance and banking sectors.

    AI systems used to influence the outcome of elections and voters’ behaviour are also classified as high-risk. Citizens will have a right to launch complaints about AI systems and receive explanations about decisions based on high-risk AI systems that could impact their rights.

    Guardrails for general AI

    To account for the wide range of tasks AI systems can accomplish and the rapid expansion of their capabilities, it was agreed that general-purpose AI (GPAI) systems will have to meet transparency requirements. These include drawing up technical documentation, complying with EU copyright law and disseminating detailed summaries of the content used for training.

    For high-impact GPAI models with systemic risk, Parliament’s negotiators secured more stringent obligations: if these models meet certain criteria, they will have to conduct model evaluations, assess and mitigate systemic risks, conduct adversarial testing, report to the Commission on serious incidents, ensure cybersecurity and report on their energy efficiency. MEPs also insisted that, until harmonised EU standards are published, GPAIs with systemic risk may rely on codes of practice to comply with the regulation.

    Measures to support innovation and SMEs

    MEPs wanted to ensure that businesses, especially SMEs, can develop AI solutions without undue pressure from industry giants controlling the value chain. Hence the agreement promotes so-called regulatory sandboxes and real-world testing, established by national authorities to develop and train innovative AI before it enters the market.

    Sanctions and entry into force

    Non-compliance with the rules can lead to fines ranging from €7.5 million or 1.5% of global turnover up to €35 million or 7% of turnover, depending on the infringement and the size of the company.

    Co-rapporteur Brando Benifei (S&D, Italy) said, “It was long and intense, but the effort was worth it. Thanks to the European Parliament’s resilience, the world’s first horizontal legislation on artificial intelligence will keep the European promise – ensuring that rights and freedoms are at the centre of the development of this ground-breaking technology.

    “Correct implementation will be key – the Parliament will continue to keep a close eye, to ensure support for new business ideas with sandboxes, and effective rules for the most powerful models”.

    Co-rapporteur Dragos Tudorache (Renew, Romania) added, “The EU is the first in the world to set in place robust regulation on AI, guiding its development and evolution in a human-centric direction. The AI Act sets rules for large, powerful AI models, ensuring they do not present systemic risks to the Union and offers strong safeguards for our citizens and our democracies against any abuses of technology by public authorities.

    “It protects our SMEs, strengthens our capacity to innovate and lead in the field of AI, and protects vulnerable sectors of our economy. The European Union has made impressive contributions to the world; the AI Act is another one that will significantly impact our digital future”.