Overview
This article is intended for teams using or planning to use AI on their websites and apps. It provides an overview of risk, requirements, governance, and how to achieve compliance.
The majority of obligations fall on providers and developers of high-risk AI systems, but developers of apps, including those using AI for educational purposes, may be involved in work that is classed as minimal risk and is therefore subject to lighter obligations.
The following summary of risk levels and requirements is based on the overview published by the European Parliament.
The new rules establish obligations for providers and users depending on the level of risk from artificial intelligence.
Unacceptable risk
Don't do these things:
- Cognitive behavioural manipulation of people or specific vulnerable groups
- Social scoring
- Biometric identification and categorisation of people
- Real-time and remote biometric identification systems, such as facial recognition
Article 5 of the act covers prohibited AI practices in detail.
High risk
Category 1 covers AI used in products subject to EU product-safety legislation, such as lifts, medicines and planes.
Some areas of Category 2 are specialised, but others may be relevant to developers of certain kinds of software.
If you are involved in any of the following, find out what your obligations are:
- Management and operation of critical infrastructure
- Education and vocational training
- Employment, worker management and access to self-employment
- Access to and enjoyment of essential private services and public services and benefits
- Law enforcement
- Migration, asylum and border control management
- Assistance in legal interpretation and application of the law
Transparency requirements
Use of Generative AI
While not classified as high-risk, the use of Generative AI, like ChatGPT, will have to comply with transparency requirements and EU copyright law.
You can comply by:
- Disclosing that the content was generated by AI
- Designing the model to prevent it from generating illegal content
- Publishing summaries of copyrighted data used for training
For example:
High-impact general-purpose AI models that might pose systemic risk, such as the more advanced AI model GPT-4, would have to undergo thorough evaluations and any serious incidents would have to be reported to the European Commission.
Content that is either generated or modified with the help of AI - images, audio or video files (for example deepfakes) - need to be clearly labelled as AI generated so that users are aware when they come across such content.
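One way an app might meet the disclosure requirement above is to attach a visible label to any AI-generated media it displays. The helper below is a minimal sketch of that idea; the function name, markup and label wording are our own illustration, not something prescribed by the Act:

```python
def label_ai_content(html_fragment: str, label: str = "AI generated") -> str:
    """Wrap a piece of AI-generated media in a <figure> element with a
    visible disclosure caption, so users are aware the content is
    AI generated when they come across it."""
    return (
        '<figure class="ai-content">\n'
        f"  {html_fragment}\n"
        f"  <figcaption>{label}</figcaption>\n"
        "</figure>"
    )

# Example: labelling a synthetic image before it is rendered in a page.
snippet = label_ai_content('<img src="portrait.png" alt="Synthetic portrait">')
print(snippet)
```

The label text could be swapped for whichever term your users best understand; the study discussed below found that perception varies considerably between labels.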
Source: EU AI Act: first regulation on artificial intelligence, published by the European Parliament.
How should AI generated content be labelled?
The paper "What label should be applied to content produced by generative AI?" by Epstein et al. considered how nine labels are perceived by people in the US, Mexico, Brazil, India, and China.
The paper is worth a read, here is a flavour of it:
AI generated, Generated with an AI tool, and AI Manipulated are the terms that participants most consistently associated with content that was generated using AI.
However, if the goal is to identify content that is misleading (e.g., our second research question), these terms perform quite poorly. Instead, Deepfake and Manipulated are the terms most consistently associated with content that is potentially misleading.
[Ed. the labels have been highlighted for emphasis.]
Governance
How will the AI Act be implemented?
The AI Office, sitting within the Commission, will be established to monitor the effective implementation of the Act and compliance by providers of general-purpose AI (GPAI) models.
Next steps
For the majority of app developers, the AI Act will not apply until thirty-six months after its entry into force, that is, from 2 August 2027.
Although that may seem a long way off, compliance may not be trivial; we advise you to begin assessing your responsibilities.
The appendix to the high-level summary includes a number of use cases you may find useful.
For example:
Education and vocational training: AI systems determining access, admission or assignment to educational and vocational training institutions at all levels. Evaluating learning outcomes, including those used to steer the student’s learning process.
Assessing the appropriate level of education for an individual. Monitoring and detecting prohibited student behaviour during tests.
[Ed. the use case has been highlighted for emphasis.]
This article was published on 18th October 2024.
Links to external references
- EU AI Act: first regulation on artificial intelligence
  The use of artificial intelligence in the EU will be regulated by the AI Act, the world's first comprehensive AI law. Article published by the European Parliament.
- High-level summary of the AI Act
  A high-level summary of the AI Act, selecting the parts most likely to be relevant to you regardless of who you are, with links to the original document so that you can always reference the Act text. Updated on 30 May 2024 in accordance with the Corrigendum version of the AI Act. Published by The EU Artificial Intelligence Act.
- EU AI Act Compliance Checker
  The EU AI Act introduces new obligations to entities located within the EU and elsewhere. Use this interactive tool to determine whether or not your AI system will be subject to these. Published by The EU Artificial Intelligence Act.