As part of its digital strategy, the EU wants to regulate artificial intelligence (AI) to ensure better conditions for the development and use of this innovative technology. AI can create many benefits, such as better healthcare, safer and cleaner transport, more efficient manufacturing, and cheaper and more sustainable energy.
In April 2021, the European Commission proposed the first EU regulatory framework for AI. Under the proposal, AI systems that can be used in different applications are analysed and classified according to the risk they pose to users. The different risk levels will mean more or less regulation. Once approved, these will be the world’s first rules on AI.
Parliament's priority is to make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly. AI systems should be overseen by people, rather than by automation, to prevent harmful outcomes. Parliament also wants to establish a technology-neutral, uniform definition for AI that could be applied to future AI systems.
The new rules establish obligations for providers and users depending on the level of risk posed by the artificial intelligence. While many AI systems pose only minimal risk, they still need to be assessed.
Unacceptable Risks:
AI systems considered a threat to people will be classified as an unacceptable risk and will be banned. Some exceptions may be allowed: for instance, “post” remote biometric identification systems, where identification occurs after a significant delay, will be allowed for the prosecution of serious crimes, but only after court approval.
AI systems that negatively affect safety or fundamental rights will be considered high risk and will be divided into two categories:
1) AI systems that are used in products falling under the EU's product safety legislation. This includes toys, aviation, cars, medical devices and lifts.
2) AI systems falling into eight specific areas that will have to be registered in an EU database.
All high-risk AI systems will be assessed before being put on the market and also throughout their lifecycle.
Generative AI
Generative AI, such as ChatGPT, would have to comply with transparency requirements, including disclosing that content was generated by AI.
Limited Risks:
Limited risk AI systems should comply with minimal transparency requirements that allow users to make informed decisions. Users should be made aware when they are interacting with AI, including AI systems that generate or manipulate image, audio or video content, for example deepfakes. After interacting with such an application, the user can then decide whether they want to continue using it.
Image: This illustration of artificial intelligence has in fact been generated by AI. © Limitless Visions